The federal government must move urgently to regulate artificial intelligence, says a top AI pioneer, warning the technology's current trajectory poses major societal risks.
Yoshua Bengio, dubbed a "godfather" of AI, told members of Parliament Monday that Ottawa should put a law in place immediately, even if that legislation is not perfect.
The scientific director at Mila, the Quebec AI Institute, says a "superhuman" intelligence, one at least as smart as a human being, could be developed within the next two decades, or even within the next few years.
"We're not ready," Bengio said.
One short-term risk of AI is the use of deepfake videos to spread disinformation, he said. They use AI to make it look as though a public figure is saying something they didn't, or doing something that never happened.
The technology can also be used to interact with people through text or dialogue "in a way that can fool a social-media user and make them change their mind on political questions," said Bengio.
"There's real concern about use of AI in politically oriented ways that go against the principles of our democracy."
A year or two down the road, the worry is that more-advanced systems can be used for cyberattacks.
AI systems are getting better and better at programming.
"When these systems get strong enough to defeat our current cyber defences and our industrial digital infrastructure, we are in trouble," Bengio said.
"Especially if these systems fall in the wrong hands."
The House of Commons industry committee where Bengio testified Monday is studying a Liberal government bill that would update privacy law and begin regulating some artificial intelligence systems.
The bill as it's drafted would give the government time to develop regulations, but Bengio says some provisions should take effect right away.
"With the current approach, it would take something like two years before enforcement (is) possible," he said.
One of the first rules he said he wants implemented is a registry that would require developers of systems above a specified level of capability to report them to the government.
Bengio said that would put the responsibility and cost of demonstrating safety on large tech companies developing these systems, rather than on taxpayers.
Bill C-27 was first drafted in 2022 to target what are described as "high-impact" AI systems.
Bengio said the government should change the legislation's definition of "high-impact" to include technology that poses national security and societal threats.
That could include any AI systems that bad actors could use to design dangerous cyberattacks and weapons, or systems that find ways to self-replicate despite programming instructions to the contrary.
Generative AI systems like ChatGPT, which can create text, images and videos, emerged for widespread public use after the bill was first introduced.
The government says it plans to amend the legislation to reflect that.
The Liberals say they aim to require the companies behind such systems to take steps to ensure the content they create is identifiable as AI-generated.
Bengio said it's "very important to cover general-purpose AI systems because they're also the ones that could be the most dangerous if misused."
Catherine Régis, a professor at the Université de Montréal, also said at the committee meeting Monday that the government needs to act urgently, citing recent "meteoric developments in AI, which we're all familiar with."
Speaking in French, she pointed out that AI regulation is a global effort, and Canada must figure out what to do at the national level if it wants to have a voice.
"Decisions will be taken on a global scale that'll have an impact on all countries," she said.
Establishing a clear and solid vision at the Canadian level is "one of the essential conditions to play a credible, structuring and influential role in global governance."