President Biden is set to issue an executive order introducing the first federal regulations on artificial intelligence (AI) systems in the United States. Under the order, advanced AI products must undergo testing to ensure they cannot be used to produce biological or nuclear weapons, and the findings from those tests must be reported to the federal government. The order also recommends that media created by AI systems, such as photos, videos, and audio, be watermarked to indicate that they were generated by AI, a response to concerns that AI could facilitate the creation of “deep fakes” and disinformation. In addition, cloud service providers will be required to disclose information about their foreign customers to the government.

The order comes ahead of a global summit on AI safety and reflects the US government’s effort to catch up with other jurisdictions, such as the European Union, in regulating AI. It aims to protect Americans from the potential risks of AI while promoting its benefits, setting standards for safety, security, and consumer protection in the technology sector. Various government agencies are instructed to develop safety standards for AI use, study the technology’s impact on the labor market, and prevent discrimination caused by AI algorithms. The White House’s authority is limited, however, and some of the order’s directives are not enforceable. The order also calls for stronger privacy legislation to protect consumer data and asks the Federal Trade Commission (FTC) to strengthen consumer protection and antitrust regulations.

The tech industry generally supports AI regulation, and some companies have already committed to voluntary safety and security measures. To support US companies in the global race for AI leadership, the order streamlines the visa process for AI experts. National security regulations will be outlined separately, including classified measures to prevent the exploitation of AI systems by foreign nations or nonstate actors. Lawmakers and officials have urged caution in drafting AI laws, given how rapidly the technology is evolving.