The G7 summit in Rome has concluded with a historic joint declaration on artificial intelligence governance. Leaders from the US, UK, France, Germany, Italy, Japan, and Canada agreed on a unified regulatory framework that sets minimum safety standards for frontier AI models.
Key provisions of the agreement include mandatory safety testing before releasing any AI model with capabilities exceeding defined thresholds, transparency requirements for training data, and the establishment of an international AI safety institute.
The US president addressed the summit, saying, "We cannot allow the race for AI dominance to outpace our ability to govern it safely. This agreement is a first step toward ensuring AI benefits all of humanity."
China and India, which are not G7 members, were invited as observer nations and expressed willingness to align with parts of the framework, particularly around safety testing protocols.
Critics from the tech industry argue the regulations could stifle innovation, while civil society groups counter that the standards do not go far enough to address concerns about bias and job displacement.