Mandatory Guardrails For High Risk AI Proposed By Australian Government

Mandatory Guardrails For High Risk AI – The Australian Federal Government has proposed 10 mandatory guardrails for AI in high-risk applications that will impact security professionals.

High-risk AI uses are considered to be those that may breach Australian human rights law, risk physical or mental health, or pose a danger to safety. Security AI solutions have multiple touch points with these guardrails.

It’s worth noting that the ‘mandatory’ AI standard being proposed is, for now, voluntary: the idea is to encourage organisations to adopt best practice while the government prepares legislation to solidify the standard as law.

The guardrails ask developers and users of ‘high-risk AI’ to take specific steps to ensure products are safe, with an emphasis on testing, transparency and accountability.

Testing, transparency and accountability were obligations flagged by the government in January, and include labelling of AI systems, watermarking, and testing of products before and after release – nothing most security solutions aren’t already well and truly across.
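To make the labelling idea concrete, here is a minimal sketch of one way AI-generated content might be declared, using only the Python standard library. The label schema, field names and model name below are illustrative assumptions, not a format mandated by the proposals paper.

```python
# Hypothetical sidecar provenance label for AI-generated content (illustrative only).
import hashlib
import json
from datetime import datetime, timezone

def label_ai_content(content: bytes, model_name: str) -> dict:
    """Build a record declaring a piece of content as AI-generated."""
    return {
        "ai_generated": True,                            # explicit AI disclosure
        "model": model_name,                             # which system produced it
        "sha256": hashlib.sha256(content).hexdigest(),   # ties the label to this exact content
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    frame = b"...analytics output bytes..."
    print(json.dumps(label_ai_content(frame, "example-detector-v1"), indent=2))
```

A hash-keyed sidecar like this is the simplest disclosure mechanism; true watermarking embeds the signal in the content itself and is considerably harder to do robustly.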

10 Mandatory Guardrails For High Risk AI

  • Guardrail 1: Establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance.
  • Guardrail 2: Establish and implement a risk management process to identify and mitigate risks.
  • Guardrail 3: Protect AI systems and implement data governance measures to manage data quality and provenance.
  • Guardrail 4: Test AI models and systems to evaluate model performance and monitor the system once deployed (see the sketch after this list).
  • Guardrail 5: Enable human control or intervention in an AI system to achieve meaningful human oversight.
  • Guardrail 6: Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content.
  • Guardrail 7: Establish processes for people impacted by AI systems to challenge use or outcomes.
  • Guardrail 8: Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks.
  • Guardrail 9: Keep and maintain records to allow third parties to assess compliance with guardrails.
  • Guardrail 10: Engage your stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness.
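
Guardrail 4’s ‘monitor the system once deployed’ can be pictured with a short sketch: track a rolling false-alarm rate for a detector and escalate when it drifts from the pre-release baseline. Every name and threshold below is an assumption made for illustration, not a value drawn from the government’s paper.

```python
# Illustrative post-deployment drift check for a detector (assumed metric and thresholds).
from collections import deque

class DriftMonitor:
    def __init__(self, baseline_rate: float, window: int = 1000, tolerance: float = 0.05):
        self.baseline_rate = baseline_rate    # false-alarm rate measured before release
        self.outcomes = deque(maxlen=window)  # rolling window of recent outcomes
        self.tolerance = tolerance            # allowed drift before human review

    def record(self, false_alarm: bool) -> None:
        self.outcomes.append(false_alarm)

    def drifted(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return False                      # not enough data to judge yet
        rate = sum(self.outcomes) / len(self.outcomes)
        return abs(rate - self.baseline_rate) > self.tolerance

monitor = DriftMonitor(baseline_rate=0.02)
for event in [False] * 900 + [True] * 100:    # simulated live traffic, 10% false alarms
    monitor.record(event)
print("escalate for review:", monitor.drifted())  # True – drift beyond tolerance
```

The point of the guardrail is the process around such checks – documented baselines, scheduled re-testing and a human escalation path – rather than any particular metric.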

More detail is available in the government’s Guiding Safe and Responsible Use of Artificial Intelligence in Australia, and there’s more from SEN News at https://sen.news.


AUTHOR

SEN News (https://sen.news)
Security & Electronics Networks - Leading the Security Industry with News and Latest Events. Providing information and pre-release updates on the latest tech and bringing it all to you daily. SEN News has been in print for over 20 years and has grown strong as a worldwide resource in digital media.
