Facial Recognition Regulation Model Law Proposed
A new report from the University of Technology Sydney (UTS) Human Technology Institute outlines a model law for facial recognition technology, designed to protect against harmful uses while fostering innovation for public benefit.
Facial Recognition Technology: Towards a Model Law has been co-authored by Professor Nicholas Davis, Professor Edward Santow, and Lauren Perry of the Human Technology Institute, UTS, and recommends reform to modernise Australian law and to address threats to Australians’ privacy and other human rights.
This report calls on Federal Attorney-General Mark Dreyfus to lead a national facial recognition reform process, which it says should start by introducing a bill into the Australian Parliament based on the model law set out in the report.
The report also recommends assigning regulatory responsibility to the Office of the Australian Information Commissioner to regulate the development and use of this technology in the federal jurisdiction, with a harmonised approach in state and territory jurisdictions.
The model law sets out three levels of risk to the human rights of individuals affected by a particular facial recognition application, as well as risks to the broader community.
Under the model law, anyone who develops or deploys facial recognition technology must first assess the level of human rights risk that would apply to their application. That assessment can then be challenged by members of the public and the regulator. Based on the risk assessment, the model law then sets out a cumulative set of legal requirements, restrictions and prohibitions.
In June 2022, a CHOICE investigation revealed that several large Australian retailers including Bunnings were using facial recognition to identify customers entering their stores, leading to calls for improved regulation. There have also been widespread calls for reform of facial recognition law – in Australia and internationally.
This new report responds to those calls. It recognises that our faces are special, in the sense that humans rely heavily on one another's faces to identify and interact with each other. That reliance leaves us particularly vulnerable to restrictions on our human rights when this technology is misused or overused.
“When facial recognition applications are designed and regulated well, there can be real benefits, helping to identify people efficiently and at scale,” said Professor Santow, the former Australian Human Rights Commissioner and now Co-Director of the Human Technology Institute. “The technology is widely used by people who are blind or have a vision impairment, making the world more accessible for those groups.
“This report proposes a risk-based model law for facial recognition. The starting point should be to ensure that facial recognition is developed and used in ways that uphold people’s basic human rights,” he said.
According to report co-author, Professor Nicholas Davis, gaps in current law have created a kind of regulatory market failure.
“Many respected companies have pulled back from offering facial recognition because consumers aren’t properly protected,” said Professor Davis, a former member of the executive committee at the World Economic Forum in Geneva and Co-Director of the Human Technology Institute.
“Those companies still offering products in this area are not required to focus on the basic rights of people affected by this technology,” he said. “Many civil society organisations, government and inter-governmental bodies and independent experts have sounded the alarm about dangers associated with current and predicted uses of facial recognition.”
Responding to the report, Corsight chief privacy officer Tony Porter said the proposals to the Australian Government for a model law to better regulate the growing use of facial recognition technology were welcome.
“Developers of facial recognition, users, regulators and the public may all derive a greater degree of assurance and confidence where the use of AI in society is regulated by clear and unambiguous rules that direct how such technologies may be used and how such use will be held to account,” Porter said.
“Corsight is involved in much of the heavy lifting in terms of developing these standards. We recognise that law and regulation inevitably lag technological developments and aim to support our clients in navigating those complexities. Our specialists – a former surveillance regulator, surveillance specialists and world class data technicians – provide bespoke advice to clients and partners around the world on lawful and compliant use.
“We look forward to supporting the Australian Government, standards bodies and regulators in assessing these proposals and developing the compliance framework to ensure face recognition technology continues to be used as a force for good.”
You can read the full report here.
#SEN #SENnews #security #electronics