The Quiet Potential Of Biden’s AI Executive Order

As the global race to regulate artificial intelligence heats up, President Biden took a significant step forward for the industry in the U.S. with an executive order signed earlier this week. The federal government has been working as quickly as it can, which is not very fast compared to most other organizations, to get its arms around this novel technology. New rules for AI have yet to emerge from Congress, as lawmakers have struggled to reach a consensus on the best approach and on which issues matter most. Biden's EO is more limited in scope than a new law from Congress would be, but it still has the potential to be transformative for the future of AI in the U.S.

The EO represents a break from Biden's previous initiatives on AI, which focused on securing nonbinding voluntary commitments from the field's largest companies. Those actions mattered chiefly because they showed the White House taking some action on the issue; they carried little real weight and were largely symbolic. Still, they helped build momentum behind the Biden administration's efforts on AI, eventually leading to this week's EO.

Unlike those nonbinding commitments, the new EO seeks to impose new regulations, though not directly on private industry. Instead, it directs federal agencies to write rules governing their own use of and interactions with AI. On the surface, this may seem like a minor area to regulate. However, its impact will come through the federal government's large-scale procurement of these technologies. The expectation is that the rules the agencies write will become de facto national standards, as AI businesses will want to ensure they can sell their products to the government.

Signing the EO will bring little immediate change, as agencies now have to go through the process of crafting their rules. This implementation phase is expected to play out for much of the next year, meaning the full effect of Biden's EO will likely not be seen until 2025. In the interim, it will be essential to keep tracking progress at these agencies and the restrictions they may be proposing. The White House will want to be as consistent as possible across the federal government, so the first proposals to emerge, likely next spring, will be the most important, as they will set the table for later measures.

Biden's EO is not the first effort to take this approach to regulating AI. A similar strategy has been championed in Congress by Senator Gary Peters (D-Mich.) and has seen some success in recent years. AI was not as much of a popular focus before this spring, and the legislation Peters has proposed has not been controversial with other lawmakers. But as his ambitions grow and AI becomes more ubiquitous, even bills that previously could pass without much trouble may become more divisive as the two parties draw red lines for AI legislation.

Peters is looking to use his position as Chair of the Senate Homeland Security and Governmental Affairs Committee to push more of these proposals through this year. His current bills include measures to require transparency in the federal government's use of AI, to create AI training programs for federal officials, and to force federal agencies to designate a point person for AI. Peters' best chance is likely to see one or more of these proposals advance as part of a larger year-end legislative package, such as an appropriations bill or the National Defense Authorization Act, since Congress is unlikely to pass many other bills before the end of the year.
