Ethical AI In The Security Industry

MORE than any other industry, and almost without perceiving it, electronic security is now at the forefront of discussions about ethical AI, and the opportunities and responsibilities of this technology are great indeed.

Not only does artificial intelligence have the power to enliven video streams and data inputs and outputs, but our industry’s modest budgets mean security managers are always looking to enhance return on investment through operational efficiencies. At the same time, there’s considerable risk for organisations whose solutions drift into perceptions of bias or breaches of privacy.

Challenging for everyone in the industry is the ubiquity of AI, which is present in almost every mildly complex solution. AI is in the app of a smart home hub and the browser interface of a 4-port NVR; it’s installed in almost every CCTV camera and is integral to key aspects of functionality in every VMS or SMS available.

Something that makes ethical AI harder is that there are few standards framing how ‘ethical’ is to be achieved. Asimov’s First Law – that a robot may not injure a human being or, through inaction, allow a human being to come to harm – helps, but organisations and individuals need their own ethical prism through which to view applications of artificial intelligence, from a home automation solution to an integrated PSIM and everything in between.

Considerations of AI and robot ethics in security applications include biases around race and gender introduced by skewed initial datasets, which may lead to false positive matches or failed authentications. Ethics also covers privacy issues, and presses up against the fundamental question of just how much automated systems need to know. Part of the industry’s issue moving forward is going to be deep learning, which works best when it’s fed oceans of data – the more data, the more accuracy. But at the same time, the more data, the more potential for intrusion.
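To make the dataset-bias point concrete, here’s a minimal Python sketch of the kind of fairness audit a security manager might run over a labelled evaluation set – the group names and records below are hypothetical – comparing a face-matching system’s false positive rate across demographic groups:

```python
from collections import defaultdict

# Hypothetical evaluation records: (group, predicted_match, actual_match).
# In practice these would come from a labelled test set, not be hard-coded.
results = [
    ("group_a", True, False), ("group_a", False, False),
    ("group_a", False, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False),
    ("group_b", False, False), ("group_b", True, True),
]

counts = defaultdict(lambda: {"fp": 0, "negatives": 0})
for group, predicted, actual in results:
    if not actual:                      # a true non-match
        counts[group]["negatives"] += 1
        if predicted:                   # flagged as a match anyway
            counts[group]["fp"] += 1

for group, c in sorted(counts.items()):
    rate = c["fp"] / c["negatives"] if c["negatives"] else 0.0
    print(f"{group}: false positive rate {rate:.0%} over {c['negatives']} non-matches")
```

A large gap between groups on a metric like this is exactly the kind of skew a biased initial dataset produces – and it’s measurable long before it becomes a headline.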

As well as protecting individual privacy, another key element when it comes to applying AI solutions is transparency, a quality that presupposes AI is explainable. And the wider discussion of ethical AI incorporates terms that tend to be contextual – words like ‘just’ and ‘fair’ and ‘responsible’. These labels are extremely difficult to quantify, particularly for security and law enforcement people, thanks to a profound internal contradiction. Justly and responsibly protecting citizens can be an argument for monitoring even the thoughts of citizens, which breaches the general law of ‘harm no human’. Simultaneously, not monitoring citizens may mean failing to detect planning for a terrorist attack, which also breaches the general law of ‘harm no human’.

A question the electronic security industry needs to keep front of mind is the level of decision-making AI is granted. There’s a singleness of purpose to AI, and in many applications human partners need to be adjacent to add context to the conclusions drawn from datasets. Human oversight needs to be engaged from the decision to use AI in a given application, to the nature of the datasets AI is tasked with assessing, to the evaluation of the results of any AI process.
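One common way to keep that human partnership adjacent is a confidence-gated workflow, in which the AI acts alone only on high-confidence detections and routes everything else to an operator. A minimal sketch, with the threshold and labels purely illustrative:

```python
REVIEW_THRESHOLD = 0.90  # hypothetical policy value, not a vendor default

def route_detection(label: str, confidence: float) -> str:
    """Act automatically only on high-confidence detections;
    escalate everything else so a human can add context."""
    if confidence >= REVIEW_THRESHOLD:
        return f"auto-log: {label} at {confidence:.0%} confidence"
    return f"escalate to operator: {label} at {confidence:.0%} confidence"

print(route_detection("vehicle in loading dock", 0.97))  # handled by the system
print(route_detection("possible weapon", 0.62))          # needs human judgement
```

Note that the threshold itself becomes part of the oversight question – setting it is a policy decision, not a technical one.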

There needs to be a trail of accountability, too. Does accountability for the use of AI rest with the security integrator, the security operator, the security manager, senior management, or the board of directors? That accountability needs to be expressed in a site’s security procedures and reflected in the nature of the AI applications that can be undertaken. And it’s hard to believe accountability will never be imposed by legislation – government involvement in the scope of AI applications is a matter of when, not if.
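Such a trail is easier to demand than to retrofit, so it’s worth sketching what a minimal accountability record for an AI-assisted decision could look like. The field names here are illustrative, not a standard:

```python
import json
from datetime import datetime, timezone
from typing import Optional

def audit_record(system: str, decision: str, confidence: float,
                 reviewed_by: Optional[str]) -> str:
    """Build one append-only audit entry for an AI-assisted decision.
    reviewed_by is None when no human was in the loop - which is
    itself a fact worth recording."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "decision": decision,
        "confidence": confidence,
        "reviewed_by": reviewed_by,
    })

print(audit_record("vms-analytics", "access_denied", 0.88, "operator_7"))
print(audit_record("vms-analytics", "door_unlocked", 0.99, None))
```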

A condition mentioned earlier that’s at once vital and all but impossible to achieve when it comes to AI is ‘transparency’. Deep learning operates by correlating patterns across vast numbers of datasets in ways not even its creators can explain. It’s simply not possible to offer transparency for a system whose workings are based on logical processes that are opaque. This opaqueness makes it challenging to establish the source of AI errors or biases. How do you investigate the cause of an event, or assign accountability for an event, when its cause cannot be explained or investigated? And can you trust the findings of an inscrutable process when it comes to critical decisions regarding safety and security?

Regardless of the challenges of using AI ethically, this technology is so useful, so capable, that it would be wrong not to deploy it wherever we can. Traditional electronic security solutions collect vast amounts of stateful data every day from hundreds of millions of sensors and cameras – data that is never used to inform careful decision-making by security and operations management, data that goes entirely to waste. The ability of AI to liberate patterns in this data to enhance our safety and security is enormous. Our responsibility for AI’s ethical use in security applications is enormous, too.
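For a sense of what liberating patterns from otherwise wasted sensor data can look like in its simplest form, here’s a sketch that flags hours whose event counts deviate sharply from the recent norm – the data and thresholds are invented for illustration:

```python
import statistics

def flag_anomalies(hourly_events, window=24, z_cut=3.0):
    """Flag hours whose event count deviates strongly from the recent
    norm - one crude way to surface patterns in sensor data that would
    otherwise go unexamined. Window and cut-off are illustrative."""
    flagged = []
    for i in range(window, len(hourly_events)):
        history = hourly_events[i - window:i]
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history) or 1.0  # avoid divide-by-zero
        z = (hourly_events[i] - mean) / stdev
        if abs(z) > z_cut:
            flagged.append((i, hourly_events[i], round(z, 1)))
    return flagged

# e.g. door-sensor events per hour; the spike at the end is flagged
print(flag_anomalies([5, 6, 4, 5, 7, 5, 6, 5, 4, 6, 5, 5,
                      6, 4, 5, 6, 5, 7, 5, 6, 4, 5, 6, 5, 40]))
```

Even a statistic this simple surfaces events worth a human look; real deployments would use richer models, but the ethical questions above apply at every level of sophistication.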

#securityelectronicsandnetworks.com

AUTHOR

SEN News (https://sen.news)
Security & Electronics Networks - Leading the Security Industry with News and Latest Events. Providing information and pre-release updates on the latest tech and bringing it all to you daily. SEN News has been in print for over 20 years and has grown strong as a worldwide resource in digital media.
