"We'll see more technology change in the next 10 years or even in the next few years than we've seen the last 50 years. That has been an astounding revelation to me, quite frankly," US President Joe Biden said in remarks from the White House.
Amazon, Anthropic, Google, Inflection and Microsoft also voluntarily agreed to the set of standards.
Among the standards is a commitment that those leading technology companies will ensure their products are safe before they are introduced to the public.
"This is a serious responsibility. We have to get it right and there's an enormous, enormous potential upside as well," Mr Biden said.
AI has captured public fascination after viral instances in which tools produced human-like text, deepfake voices and images mimicking a person's likeness.
But it has also brought concern over the rapid development of the industry.
"Social media has shown us the harm that powerful technology can do without the right safeguards in place," Mr Biden said.
Tesla founder Elon Musk and Apple co-founder Steve Wozniak were among hundreds of technology leaders who signed a letter calling for large AI experiments to be paused until a set of safety protocols is developed.
The US Congress is also exploring steps it can take to pass legislation that would regulate AI's development.
Giving evidence before Congress earlier this year, OpenAI chief executive Sam Altman spoke of the necessity to regulate AI.
“I think if this technology goes wrong, it can go quite wrong,” he told politicians.
UN Secretary General Antonio Guterres also warned of the potential dangers of AI and suggested a multilateral body be created to regulate it.
Meanwhile, the EU is set to debate a bill later this year that would impose a far-reaching set of rules on the technology.
“Policymakers around the world are considering new laws for highly capable AI systems. Today’s commitments contribute specific and concrete practices to that continuing discussion,” said Anna Makanju, OpenAI's vice president of global affairs.