Q. WHAT’S THE DEBATE?
· Views on AI regulation span a wide spectrum: some advocate comprehensive regulation or control, while others argue that only partial regulation is needed at present. Even among those who agree that some control is required, opinions differ on how much should be imposed.
Q. WHAT SPARKED THIS DEBATE?
· The Center for AI Safety (CAIS) recently released a brief statement aimed at sparking conversation about potential existential threats posed by artificial intelligence (AI). “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads the one-sentence statement. More than 350 AI executives, academics, and engineers signed it, including top leaders of three of the largest AI start-ups: Sam Altman, CEO of OpenAI; Demis Hassabis, CEO of Google DeepMind; and Dario Amodei, CEO of Anthropic. The statement comes at a time of rising concern about the possible dangers of artificial intelligence.
Q. IS ARTIFICIAL INTELLIGENCE A FRANKENSTEIN’S MONSTER?
“Artificial intelligence ‘hacked’ human civilization’s operating system”
· According to Sam Altman, CEO of OpenAI, the artificial intelligence company that developed ChatGPT, government action will be vital in limiting the risks of increasingly powerful AI systems. Altman recommended establishing a US or global organisation with the authority to licence the most powerful AI systems, to take that licence away, and to ensure compliance with safety standards.
· He advocated that a new regulatory agency enforce safeguards to prevent AI models from ‘self-replicating and self-exfiltrating into the wild’, reflecting fears that humans could one day lose control of powerful AI systems.
· Yuval Noah Harari, the Israeli philosopher and author of Homo Deus and Sapiens: A Brief History of Humankind, recently argued in a leading publication that artificial intelligence has ‘hacked’ human civilization’s operating system. He notes that humanity has been gripped by dread of AI since the dawn of the computer age. He discussed how AI could affect culture, taking up the case of language, which is fundamental to human culture.
· Harari went on to say that the rise of artificial intelligence is having a tremendous impact on society, influencing different facets of economics, politics, culture, and psychology. He has discussed how AI could build close ties with humans and influence their judgements. ‘Through its mastery of language, AI could even form intimate relationships with people and use the power of intimacy to change our opinions and worldviews,’ he writes. He cited the case of Blake Lemoine, a Google engineer who was fired after publicly claiming that the AI chatbot LaMDA had become sentient.
· Prof. Gary Marcus, an AI specialist, pointed out that tools like chatbots might sway people’s beliefs far more subtly than social media does. Companies that select which data goes into their large language models (LLMs) have the potential to alter civilizations in subtle and significant ways. And according to a recent Pew Research Centre study, AI will have a significant impact on jobs over the next 20 years.
· Another barrier
to AI adoption is that AI systems obtain their data from unrepresentative
sources.
· If the data is distorted, the AI will learn the distortion and may even amplify the bias. According to the Harvard Business Review, “AI systems that produce biased results have been making headlines. Apple’s credit card algorithm, for example, has been accused of discriminating against women, prompting an investigation by New York’s Department of Financial Services. The issue of controlling AI appears in many other forms, such as pervasive online advertisement algorithms that may target viewers based on ethnicity, religion, or gender.”
· According to a recent study published in Science, risk-prediction systems used in health care, which affect millions of people in the United States each year, exhibit significant racial bias. Another study, published in the Journal of General Internal Medicine, found that the software used by top hospitals to prioritise kidney-transplant recipients was biased against Black patients. In theory, it may be possible to programme some notion of fairness into the software, mandating that all outcomes meet specific criteria. Amazon, for example, is experimenting with a fairness metric known as conditional demographic disparity, and other companies are working on similar criteria.
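To make the idea concrete, a metric of this kind can be sketched in a few lines. The snippet below is an illustrative simplification, not Amazon’s actual implementation: demographic disparity compares a group’s share among rejected outcomes with its share among accepted ones, and the conditional version averages that gap across strata of a confounding variable (an income band, in this hypothetical example).

```python
def demographic_disparity(records, group):
    """records: iterable of tuples whose first two fields are
    (group_label, accepted: bool). Returns the group's share of
    rejections minus its share of acceptances (0 means parity)."""
    accepted = [r[0] for r in records if r[1]]
    rejected = [r[0] for r in records if not r[1]]
    p_rejected = rejected.count(group) / len(rejected) if rejected else 0.0
    p_accepted = accepted.count(group) / len(accepted) if accepted else 0.0
    return p_rejected - p_accepted


def conditional_demographic_disparity(records, group, stratum_of):
    """Average the disparity over strata (e.g. income bands),
    weighting each stratum by its share of all records."""
    strata = {}
    for rec in records:
        strata.setdefault(stratum_of(rec), []).append(rec)
    n = len(records)
    return sum(len(s) / n * demographic_disparity(s, group)
               for s in strata.values())


# Hypothetical credit-card decisions: (group, accepted, income band)
decisions = [
    ("women", True, "high"), ("women", False, "high"),
    ("men", True, "high"), ("men", True, "high"),
]
gap = conditional_demographic_disparity(decisions, "women",
                                        lambda r: r[2])
```

Conditioning on a legitimate factor such as income matters because of Simpson’s-paradox effects: an apparent disparity in the aggregate can shrink, vanish, or reverse once outcomes are compared within comparable strata.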
Q. CAN ARTIFICIAL INTELLIGENCE REPLACE HUMAN INTELLIGENCE?
‘AI is not intelligence, and the idea that AI will replace human intelligence is unlikely’
· AI is not intelligence; it is prediction, according to the World Economic Forum. “We’ve noticed an increase in the machine’s capacity to accurately forecast and execute a desired outcome with large language models. However, equating this with human intelligence would be a mistake. This is evident when looking at machine-learning systems, which, for the most part, can still only accomplish one task very well at a time. This is not common sense, and it is not equal to human levels of thinking that allow for easy multitasking. Humans can absorb information from one source and apply it in a variety of ways. In other words, human intelligence is transferable, but machine intelligence is not.”
BENEFITS OF AI:
· According to the World Economic Forum, “AI has enormous potential to do good in a variety of sectors, including education, healthcare, and climate change mitigation.” FireAId, for example, is an AI-powered computer system that predicts the likelihood of forest fires from seasonal variables using wildfire risk maps; it also assesses wildfire danger and intensity to aid resource allocation. In healthcare, AI is applied to improve patient care through more personalised and effective prevention, diagnosis, and treatment, and the resulting efficiencies are reducing healthcare expenditures. Furthermore, AI is poised to substantially alter, and presumably improve, elder care.
· A study of the performance of over 5,000 customer-care employees found that workers were 14% more productive when using generative AI tools. According to the study, pairing workers with an AI assistant was far more beneficial for rookie and low-skilled personnel, while the effect on highly skilled personnel was negligible.
Q. REGULATING AI?
· Throughout history, governments have regulated emerging technologies with varying degrees of success; automobiles, railways, and the telegraph and telephone are a few examples. AI systems, like these other technologies, are tools employed by humans. The societal impact of AI systems is determined largely not by the complicated code that underpins them, but by who uses them, for what goals, and on whom they are used. And all of these things are controllable. The successful regulation of new technologies in the past suggests that we should concentrate on AI’s impacts and applications.
· We do not have much hard evidence that unaligned AI poses a serious risk to humanity, beyond speculation about robots taking over the globe or a computer converting the earth into paperclips. If there is little or no evidence of a problem, there may be little, if any, benefit to regulation. Another reason to be wary of regulation is its cost. AI is a new technology still in its infancy. Because we still do not fully understand how AI works, attempts to regulate it might quickly backfire, restricting innovation and impeding development in this fast-evolving sector. Any laws that are enacted are likely to be tailored to existing practices and players, which makes little sense when it is unclear which AI technologies will be the most successful or which AI players will dominate the business.
Q. WHAT IS INDIA’S RESPONSE TO DEMANDS FOR AI REGULATION?
· Rajeev
Chandrasekhar, Minister of State, Ministry of Electronics and Information
Technology, stated after assuming the Chair of the Global Partnership on
Artificial Intelligence (GPAI), an international initiative to support
responsible and human-centric development and use of artificial intelligence
(AI), “We will work in close cooperation with member states to put in place
a framework around which the power of AI can be exploited for the good of
citizens and consumers.”
· The Minister stated that India is constructing an ecosystem of modern cyber laws and regulations driven by the boundary conditions of openness, safety and trust, and accountability, emphasising that AI is a dynamic enabler for advancing existing investments in technology and innovation.
· With a National AI Programme and a National Data Governance Framework Policy in place, and one of the world’s largest publicly accessible data sets in the works, the Minister reaffirmed India’s commitment to the efficient use of AI for catalysing an innovation ecosystem capable of producing good, trusted AI applications for our citizens and the world at large.
· The Minister of
State for Information Technology and Electronics, who is overseeing a mammoth
operation involving extensive engagement with stakeholders to frame the draft Digital
India Act, which will replace the two-decade-old IT Act, stated that India
has its own ideas on “guardrails” that are required in the digital world.
· Union Minister Rajeev Chandrasekhar stated that India will do “what is right” to protect its digital nagriks (citizens) and keep the internet safe and trusted for its users under the upcoming Digital India framework, which will include a chapter devoted to emerging technologies, particularly artificial intelligence, and how to regulate them through the “prism of user harm”.