· As artificial intelligence technologies become omnipresent and their algorithms more advanced—capable of performing a wide variety of tasks including voice assistance, recommending music, driving cars, detecting cancer, and even deciding whether you get shortlisted for a job—the risks and uncertainties associated with them have also ballooned.
· Many AI tools are essentially black boxes, meaning even those who design them cannot explain what goes on inside them to generate a particular output. Complex and unexplainable AI tools have already manifested in wrongful arrests due to AI-enabled facial recognition, in discrimination and societal biases seeping into AI outputs, and most recently, in how chatbots based on large language models (LLMs) like Generative Pre-trained Transformer-3 (GPT-3) and GPT-4 can generate versatile, human-competitive and genuine-looking content, which may nevertheless be inaccurate and use copyrighted material created by others.
Q. What is the aim of this act?
· The legislation was drafted in 2021 with the aim of bringing transparency, trust, and accountability to AI, and of creating a framework to mitigate risks to the safety, health, fundamental rights, and democratic values of the EU.
· It
also aims to address ethical questions and implementation challenges in various
sectors ranging from healthcare and education to finance and energy. The
legislation seeks to strike a balance
between promoting “the uptake of AI while mitigating or preventing harms
associated with certain uses of the technology”.
· The AI law aims to “strengthen Europe’s position as a global hub of excellence in AI from the lab to the market” and to ensure that AI in Europe respects the 27-country bloc’s values and rules.
Q. What does the Artificial Intelligence Act entail?
Q. What is the definition of AI in this act?
· The
Act broadly defines AI as “software that
is developed with one or more of the techniques that can, for a given set of
human-defined objectives, generate outputs such as content, predictions,
recommendations, or decisions influencing the environments they interact with”.
Under this definition, the Act identifies AI tools based on machine learning and deep learning, knowledge- and logic-based approaches, and statistical approaches.
Q. What is the classification of AI in this act?
· The Act’s central approach is the classification of AI technologies based on the level of risk they pose to the “health and safety or fundamental rights” of a person. There are four risk categories in the Act: unacceptable, high, limited and minimal.
· The Act prohibits the use of technologies in the unacceptable-risk category, with few exceptions. These include real-time facial and biometric identification systems in public spaces; China-like systems of social scoring of citizens by governments leading to “unjustified and disproportionate detrimental treatment”; subliminal techniques to distort a person’s behaviour; and technologies that can exploit vulnerabilities of the young or elderly, or of persons with disabilities.
Q. What are the focus areas in this act?
· The Act places substantial focus on AI in the high-risk category, prescribing a number of pre- and post-market requirements for developers and users of such systems. Systems falling under this category include biometric identification and categorisation of natural persons; AI used in healthcare, education, employment (recruitment), law enforcement, and justice delivery systems; and tools that provide access to essential private and public services (including access to financial services such as loan-approval systems).
Q. What is the recent
proposal on General Purpose AI like ChatGPT?
· Lawmakers now target the use of copyrighted material by companies deploying generative AI tools such as OpenAI’s ChatGPT or the image generator Midjourney, as these tools are trained on large sets of text and visual data from the internet. Such companies will have to disclose any copyrighted material used to develop their systems.
Q. Where does global
AI governance currently stand?
· The U.S. does not currently have comprehensive AI regulation and has taken a fairly hands-off approach. The Biden administration released a Blueprint for an AI Bill of Rights (AIBoR). Developed by the White House Office of Science and Technology Policy (OSTP), the AIBoR outlines the harms of AI to economic and civil rights and lays down five principles for mitigating these harms. The administration has described the AIBoR as guidance or a handbook rather than binding legislation.
· On the other end of the spectrum, China over the last year came out with some of the world’s first nationally binding regulations targeting specific types of algorithms and AI. It enacted a law to regulate recommendation algorithms, with a focus on how they disseminate information. The Cyberspace Administration of China (CAC), which drafted the rules, told companies to “promote positive energy”, not to “endanger national security or the social public interest”, and to “give an explanation” when they harm the legitimate interests of users. Another piece of legislation targets deep-synthesis technology used to generate deepfakes.
Test Yourself:
Q26. With the present state of development, Artificial Intelligence can effectively do which of the following? (Prelims 2020)
1. Bring down electricity consumption in industrial units
2. Create meaningful short stories and songs
3. Disease diagnosis
4. Text-to-Speech Conversion
5. Wireless transmission of electrical energy
Select the correct answer using the code given below:
(a) 1, 2, 3 and 5 only
(b) 1, 3 and 4 only
(c) 2, 4 and 5 only
(d) 1, 2, 3, 4 and 5
Ans: (b)