Key takeaways
- The Biden administration and corporations are turning to blockchain and cryptographic verification to combat AI misinformation, even as deepfake technology outpaces detection efforts.
- The absence of comprehensive federal laws and the slow pace of regulatory responses highlight the urgent need for effective measures against AI-generated misinformation.
- Saudi Aramco signed an agreement with droppGroup to build Web3 applications to help Aramco’s employees.
As deepfake technology advances, the Biden administration and corporations like the Saudi Arabian Oil Company (Saudi Aramco) are mobilizing to verify communications and pioneer blockchain to stem the tide of artificial intelligence-powered misinformation. However, detection and regulation continue to lag behind advances in AI and deepfake technology.
The White House is working urgently to ensure the authenticity of its statements and videos as cutting-edge deepfakes undermine trust in institutions. Recently, an illegal AI-generated robocall mimicked President Biden’s voice in an attempt to deter voting.
In response, Ben Buchanan, Biden’s advisor on AI, revealed White House plans to “cryptographically verify” all communications using technology to certify real videos or documents. This initiative follows the explosive growth of AI, like ChatGPT, that can efficiently create remarkably realistic fake multimedia.
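The White House has not published technical details of its verification plan, but the underlying idea is standard message authentication: a publisher attaches a cryptographic tag to each statement, and anyone can check that the content has not been altered. The sketch below is purely illustrative and uses HMAC from the Python standard library; the key name and functions are hypothetical, and a real public-facing scheme would instead use asymmetric signatures (e.g., Ed25519) so verification needs only a public key.

```python
import hmac
import hashlib

# Hypothetical signing key held by the publisher. HMAC is symmetric and is
# used here only because it ships with the standard library; a real
# deployment would use public-key signatures so anyone can verify.
SIGNING_KEY = b"example-secret-key"

def sign_statement(statement: bytes) -> str:
    """Produce an authentication tag for an official statement."""
    return hmac.new(SIGNING_KEY, statement, hashlib.sha256).hexdigest()

def verify_statement(statement: bytes, tag: str) -> bool:
    """Check a statement's tag using a constant-time comparison."""
    expected = hmac.new(SIGNING_KEY, statement, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

statement = b"Official press release text"
tag = sign_statement(statement)
print(verify_statement(statement, tag))         # genuine content -> True
print(verify_statement(b"Tampered text", tag))  # altered content -> False
```

The constant-time comparison (`hmac.compare_digest`) matters in practice: a naive string comparison can leak information about the expected tag through timing differences.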
The pledge for verification stems from a deep concern about public manipulation. Non-consensual deepfakes depicting celebrities and other women are flooding social media, predominantly as revenge or pornographic content. Multiple states have moved to ban deepfake pornography specifically, but enforcement is inconsistent.
Saudi Aramco partners with droppGroup
While governments play catch-up on deepfake threats, major corporations like Saudi Aramco are charging ahead with their partnership with droppGroup on Web3 and blockchain pilots to help employees.
The applications will initially aim to assist Aramco’s employees. Planned offerings include blockchain-based onboarding and training ecosystems to ease entrance and growth for personnel, as well as tokenized incentive structures to motivate and reward workers. With $2 trillion in assets, Aramco’s efforts signal growing mainstream adoption, moving blockchain from speculation to practical deployment.
Calls for deepfake regulation mount
Despite corporate blockchain progress, unresolved dangers of AI, like deepfakes, loom amid lax regulations. Beyond personal violations, deepfakes could manipulate markets or public opinion around events such as Russia’s ongoing war in Ukraine.
The United States lacks an overarching federal law explicitly banning deepfake production or distribution. Pending EU regulations would force platforms to label AI-generated content as synthetic. But without reliable indicators, constantly evolving deepfakes may still evade such safeguards.
Startups and tech giants like Intel are making advances in deepfake detection through AI and other analytics. However, identification technology trails the viral spread of fakes across social channels, and a dangerous gap is widening. With democracy and individual rights threatened by unchecked AI risks, pressure is mounting on government and industry leaders for impactful countermeasures.
Conclusion
As a pioneer in Web3 technology combined with AI and machine learning, droppGroup knows how to apply Web3 tools to the AI field. Together with Saudi Aramco, it is developing Web3 applications that combine the advantages of Web3 and AI for Aramco’s staff, with the goal of easing onboarding, training, and career growth for personnel.