An analysis carried out by the US government has raised concerns about the EU’s AI Act, warning that the legislation put forward by the bloc could end up curbing investment in the technology and favoring large AI companies at the expense of smaller rivals.

The analysis, carried out by the State Department, also warned that the EU regulations are likely to dampen “the expected boost to productivity” from AI-based technology and potentially lead to “a migration of jobs and investment to other markets,” according to a report from Bloomberg.

Furthermore, as-yet unpublished documents related to the analysis describe some of the rules in the EU’s AI Act as “vague or undefined.” Similar criticisms, though, were leveled at the US government in July after President Joe Biden sought "voluntary commitments" from seven leading AI tech companies to help address safety, security, and trust issues associated with the technology.

Meanwhile, investment in AI research, development, and commercialization within the EU is likely to suffer, according to the State Department analysis, with the resource-intensive nature of training large language models (LLMs) limiting the competitiveness of European firms. The cost of compliance with the AI Act could mean that only the largest tech companies have the resources to train machine learning systems.

The State Department declined to comment on the documents.

"We do not comment on purportedly leaked information," according to a statement sent by a State Department spokesperson. "Through our strong and continuous partnership with the European Union, we seek digital solidarity on key bilateral issues."

While the US government has yet to publish a legislative plan for AI as comprehensive as the EU's, it has so far taken a risk-focused approach to regulating how AI models are used, looking mainly at possible negative outcomes. The EU, by comparison, has also weighed the risks of AI, but its legislation focuses more heavily on how AI models are developed.

The European Parliament approved a draft of the EU's AI Act in June, although each EU country still needs to agree on the bill before it becomes law.

The legislation currently requires generative AI systems to meet transparency obligations, including disclosing when content is AI-generated and helping to distinguish deepfake images from real ones.

However, some of the more controversial rules expected to draw backlash include the total ban on biometric surveillance in public settings and on so-called “social scoring” systems, which classify people based on their social behavior, socioeconomic status, and personal characteristics. Members of the European Parliament who make up the center-right European People’s Party faction have argued that a total ban on biometric use could hamper crime-solving and counter-terrorism efforts.
