· The annual Group of Seven (G7) Summit, hosted by Japan, took place in Hiroshima on May 19-21, 2023. Among other matters, the G7 Hiroshima Leaders’ Communiqué initiated the Hiroshima AI Process (HAP) – an effort by this bloc to determine a way forward to regulate artificial intelligence (AI).
· Issues like the war in Ukraine, economic security, supply chain disruptions, the regulation of AI, and nuclear disarmament were discussed.
· The communiqué stated: “To establish the Hiroshima AI process, through a G7 working group, in an inclusive manner and in cooperation with the OECD and GPAI, for discussions on generative AI by the end of this year. These discussions could include topics such as governance, safeguard of intellectual property rights including copyrights, promotion of transparency, response to foreign information manipulation, including disinformation, and responsible utilisation of these technologies.”
· The HAP is likely to conclude by December 2023. The first meeting under this process was held on May 30.
Q. Why is the Hiroshima AI Process important?
· The HAP is expected to yield the guiding principles by which the G7 will govern AI.
· AI development and
implementation must be aligned with values such as freedom, democracy, and
human rights. Values need to be linked to principles that drive regulation. To
this end, the communiqué also stresses fairness, accountability, transparency,
and safety.
· The communiqué also
spoke of “the importance of procedures that advance transparency, openness, and
fair processes” for developing responsible AI. “Openness” and “fair processes”
can be interpreted in different ways, and the exact meaning of the “procedures
that advance them” is not clear.
Q. What does the process entail?
· An emphasis on freedom, democracy, and human rights, and mentions of “multi-stakeholder international organisations” and “multi-stakeholder processes” indicate that the HAP isn’t expected to address AI regulation from a State-centric perspective. Instead, it is meant to involve multiple stakeholders in various processes and to ensure those processes are fair and transparent.
· The task before the HAP is challenging, considering the divergence among G7 countries on, among other things, regulating the risks arising from the application of AI. It can help these countries develop a common understanding of some key regulatory issues while ensuring that any disagreement doesn’t result in complete discord.
Q. Is there an example of how the process can help?
· The matter of intellectual property rights (IPR) offers an example of how the HAP can help. Here, the question is whether training a generative AI model, like ChatGPT, on copyrighted material constitutes a copyright violation. While IPR in the context of AI finds mention in the communiqué, how the relationship between AI and IPR will be treated in different jurisdictions is not clear. There have been several conflicting interpretations and judicial pronouncements.
· The HAP can help the G7 countries move towards a consensus on this issue
by specifying guiding rules and principles related to AI and IPR. For example,
the process can bring greater clarity to the role and scope of the ‘fair use’
doctrine in the use of AI for various purposes.
· Generally, the ‘fair use’ exception is invoked to allow activities like teaching, research, and criticism to continue without seeking the copyright owner’s permission to use their material. Whether the use of copyrighted materials in datasets for machine learning qualifies as fair use is a controversial issue.
· As an example, the HAP can develop a common guideline for G7 countries
that permits the use of copyrighted materials in datasets for machine-learning
as ‘fair use’, subject to some conditions. It can also differentiate use for
machine-learning per se from other AI-related uses of copyrighted
materials.
· This in turn could affect the global discourse and practice on this issue.
· Overall, the establishment of the HAP makes one thing clear: AI governance has become a truly global issue that is likely to only become more contested in future.