

    International Regulation of AI Development and Application: Is it Feasible?

    • 15.07.2023
    • By Hugh Stephens

    It seems that every day a new report emerges regarding concerns with artificial intelligence (AI) and how it will likely impact our lives. There have been dire suggestions that unless something is done, one day AI will take over, resulting in the end of humanity. There have equally been suggestions that, as with other technological advances in the past, AI will be accepted, normalized and incorporated into daily life. Society will adjust. Increasingly, however, there seems to be a recognition that there is an important role for government to play in creating the playing field on which AI will operate. In other words, establishing the rules of the game in order to control the worst excesses of unregulated AI.

    A major question is how stringent those rules should be, and how to harmonize them between and among governments. Just as competition between companies is rapidly driving AI development, with some enterprises trying to gain competitive advantage by releasing AI applications early in the development phase while exerting very little control in the form of filters or limitations, equally there is a risk that among governments there will be a race to the bottom as competing jurisdictions seek to present themselves as the most “innovation-friendly” in a bid to attract jobs and investment. The solution is likely some form of international regime to set minimum standards and ensure transparency. The situation today in some ways resembles the copyright free-for-all that existed in the 19th century when it was every nation for itself. In the end, the benefits of universal standards and application were recognized, leading to the first international copyright convention, the Berne Convention of 1886. Since then, there have been many international agreements covering a wide range of fields, with international institutions growing out of the UN system.

    Arguments have been made by AI industry leaders, like Sam Altman of OpenAI, for the creation of an international entity like the International Atomic Energy Agency (IAEA) to manage AI. (Others have argued that the IAEA model is not appropriate). Any form of internationally agreed regime is a far-reaching goal which, if it ever happens, will be preceded by more limited governance arrangements between leading creators and users of AI. Groups like the G7 and G20 have already issued statements on AI. The G7 Leaders’ Statement issued at Hiroshima in May included as one of its key objectives the intention of the member states to “advance international discussions on inclusive artificial intelligence (AI) governance and interoperability to achieve our common vision and goal of trustworthy AI, in line with our shared democratic values.” To give effect to this, a working group will be established in cooperation with the OECD and the GPAI (Global Partnership on Artificial Intelligence) for discussions on AI before the end of 2023. These discussions could include topics such as “governance, safeguard of intellectual property rights including copy rights, promotion of transparency, response to foreign information manipulation, including disinformation, and responsible utilization of these technologies.”

    The GPAI, referenced above, describes itself as “a multi-stakeholder initiative which aims to bridge the gap between theory and practice on AI by supporting cutting-edge research and applied activities on AI-related priorities”, and comprises 28 member states plus the EU. There is no shortage of “talk-shops” working to move toward some form of international regulation. When and to what extent these fora will succeed in producing binding principles or rules remains to be seen. Until that happens, much of the action will be at the national or regional level.

    The EU is already taking a first step with the approval this month by the European Parliament of the AI Act, the beginning of a long process to bring legislation controlling AI into force. The Act will still need to clear the trilogue process, in which the Parliament negotiates a final text with the EU Council and the European Commission, and there will likely be changes as it goes through that process. The current version incorporates not only controls on outputs but also deals with inputs, requiring firms to publish summaries of the copyrighted data used to train their tools in order to address the issue of unrestrained scraping of websites containing copyrighted content.

    Outputs and inputs are the two big issues, with most of the attention to date being focussed on the former. When it comes to controls on AI-generated content, concerns have been expressed about AI’s potential to incorporate racial or gender-based bias into algorithms, or to spread misinformation or disinformation. Concern has also revolved around how AI is used in applications that could violate personal privacy, such as facial recognition technologies. These are all very legitimate concerns that are addressed in draft legislation being discussed in a number of countries, including the EU’s AI Act, but to date insufficient attention has been given to the question of AI inputs. The EU Act is one of the first to specifically address this dimension.

    (Canada’s Artificial Intelligence and Data Act, abbreviated as AIDA, is long on potential output problems but barely touches on the input question, except for a suggestion that AI systems should keep documentation on how design and development requirements have been met and provide users with documentation about the datasets used).

    The question of whether AI technology developers are free to help themselves to any and all data that they can access on the internet is far from settled. In the US, there are those who loudly proclaim that accessing data (including copyrighted and proprietary content) to feed AI algorithms is fair use, but that is a questionable assertion that remains to be decided legally. A number of countries have copyright exceptions for text and data mining (TDM), but these exceptions typically come with limitations, for example requiring that the output be used only for research and non-commercial activities. (Canada does not currently have a general TDM exception). Also to be taken into account is the fact that licensed databases for use in AI development are available in a number of content areas, such as publishing or news content, an argument for limiting the “help yourself” approach currently being practiced. And then there is the issue of what data should be excluded altogether, for example, personal health records.

    One tool to manage the unlicensed content being used to feed AI machines is to require that tech companies maintain and publish transparent data on the material they have accessed, allowing rights-holders and individual citizens to identify whether and to what extent their content or personal data is being used without permission. While such information could be used to allow rights-holders to opt out of having their content accessed, the onus nonetheless should be on those taking the content to secure permission in advance. Even if a documentation requirement is implemented, a great deal of damage has already been done: existing AI models have ingested vast quantities of content, almost all of it unauthorized.
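
    To make the idea concrete, the following is a minimal sketch, in Python, of what one machine-readable entry in such a published disclosure might look like, and how a rights-holder could query it. The TrainingDataRecord fields and the find_unlicensed helper are illustrative assumptions; no such schema has been standardized, and nothing here describes an actual law or system.

# A minimal, hypothetical sketch of a machine-readable training-data
# disclosure record. Every field name and function below is an
# illustrative assumption; no such schema has been standardized in law.
from dataclasses import dataclass


@dataclass
class TrainingDataRecord:
    """One entry in a published summary of ingested training material."""
    source_url: str      # where the material was obtained
    rights_holder: str   # identified owner of the content, if known
    licensed: bool       # whether use was licensed in advance
    content_type: str    # e.g. "news article", "image", "code"
    date_ingested: str   # ISO-8601 date the material was collected


def find_unlicensed(records, holder):
    """Return a rights-holder's material that was ingested without a licence."""
    return [r for r in records
            if r.rights_holder == holder and not r.licensed]


# Example: a rights-holder scanning a published disclosure for unlicensed use.
records = [
    TrainingDataRecord("https://example.com/story-1", "Example News Co.",
                       False, "news article", "2023-01-15"),
    TrainingDataRecord("https://example.com/photo-9", "Example News Co.",
                       True, "image", "2023-02-02"),
]
print(find_unlicensed(records, "Example News Co."))  # -> the unlicensed story

    Even a simple structure like this would let rights-holders check published disclosures for their material, which is the practical effect the transparency proposals described above are aiming at.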

    While the EU is in the forefront, many jurisdictions are grappling with the AI issue. In Britain, the Sunak government is seeking to stake out a claim for the UK as a leader in AI regulation while making it a “go-to” place for AI investment and development. Having “freed” itself from the “shackles” of EU membership, the UK wants to portray itself as a responsible middle ground between EU regulation and the still largely unregulated free-for-all that exists in the United States. (It’s not that Congress is unaware of the range of issues presented by AI; the question is whether anything is attainable in the current highly partisan political atmosphere.) Sunak has proclaimed that Britain will host a summit on AI regulation in the fall, focussed on safety, and has secured the Biden Administration’s agreement to participate.

    A coordinated approach among governments is essential to avoid the “weakest link” problem: the weakest link would quickly be exploited by some AI developers to gain an unfair advantage. This is already happening with respect to TDM laws, where some countries, Japan being one example, are being accused of throwing creators and rights-holders under the bus in the name of promoting innovation. (There is, however, some question about how loose the Japanese TDM exception actually is). Singapore is another country where content concerns seem to play a very quiet second fiddle to the cacophony of tech interests. In the UK, initial misguided proposals to implement a limitless TDM exception, removing any possibility of licensing content for TDM purposes and in effect allowing expropriation of copyrighted content, were reversed after concerted pushback from the UK creative community.

    It will not be easy for competing jurisdictions to hammer out a framework that will gain wide acceptance, particularly with the US, a global leader in innovation, being slow to the regulatory party. Developing nations with different governance models, like China, also need to be considered. China recently announced new measures to regulate generative AI and although the motivation is more to strengthen state control than to promote democratic values, according to the Carnegie Endowment for International Peace, “many of the regulation’s proposed measures map neatly onto the AI principles described in the consensus international AI policy instruments that are often sold as affirming democratic values, such as transparency, respect for individual rights, fairness, and privacy.” Perhaps that provides some hope for the achievement of an international agreement on use of AI, while allowing signatories a degree of flexibility in how the provisions are applied and interpreted. (After all, that flexibility is what allows many UN instruments to work today). What is clear is that if there is to be an international agreement, while it may begin with “like-minded” states, it will eventually have to move beyond the usual suspects, especially to include countries where widescale AI development is likely to occur.

    International regulation and agreement on common standards will not be an easy process. But the fact that so many international forums and national governments are seized of the issue is indicative of the recognition that now is the time to get ahead of the problem. So far, governments are playing catch-up, and that poses plenty of challenges. While individual jurisdictions will still jockey for advantage, there seems to be a growing acceptance that a series of regulatory silos is not going to serve national or global interests. How quickly that will be translated into concrete action is the big question. At the end of the day, some form of international coordination, if not regulation, is essential.

    This article was first published on Hugh Stephens Blog