MORE than any other industry, and almost without perceiving it, electronic security now finds itself at the forefront of discussions about ethical AI – and the opportunities and responsibilities of this technology are great indeed.
Artificial intelligence has the power to enliven video streams and data inputs and outputs, and our industry's modest budgets mean security managers are always looking to enhance return on investment through operational drift – extending security systems into operational roles. At the same time, there's considerable risk for organisations whose solutions drift into perceptions of bias or breaches of privacy.
Challenging for everyone in the industry is the ubiquity of AI, which is present in almost every mildly complex solution. AI is in the app of a smart home hub and in the browser interface of a 4-port NVR; it's installed in almost every CCTV camera and is integral to key aspects of functionality in every VMS or SMS available.
Something that makes ethical AI harder is that there are few standards framing how 'ethical' is to be achieved. Asimov's First Law – that a robot may not injure a human being or, through inaction, allow a human being to come to harm – helps, but organisations and individuals need their own ethical prism through which to view applications of artificial intelligence, from a home automation solution to an integrated PSIM and everything in between.
Considerations of AI and robot ethics in security applications include biases around race and gender introduced by skewed initial datasets, which may lead to false positive matches or failed authentications. Ethics also covers privacy issues, and presses up against the fundamental question of just how much automated systems need to know. Part of the industry's challenge moving forward is going to be deep learning, which works best when it's fed oceans of data – the more data, the more accuracy. But at the same time, the more data, the more potential for intrusion.
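To make the bias risk concrete, here's a minimal sketch of the kind of audit a team might run over the output of a face-matching evaluation. The data, group labels and numbers are hypothetical – the point is that an aggregate error rate can hide a far worse rate for an under-represented group.

```python
from collections import defaultdict

def false_match_rates(results):
    """Per-group false match rate from hypothetical evaluation results.

    `results` is a list of (group, predicted_match, actual_match) tuples.
    A false match is a non-matching pair the system wrongly accepted.
    """
    negatives = defaultdict(int)      # non-matching pairs seen per group
    false_matches = defaultdict(int)  # non-matching pairs wrongly accepted

    for group, predicted, actual in results:
        if not actual:
            negatives[group] += 1
            if predicted:
                false_matches[group] += 1

    return {g: false_matches[g] / n for g, n in negatives.items()}

# Hypothetical results: group_b is under-represented in the dataset.
results = ([("group_a", False, False)] * 980 + [("group_a", True, False)] * 20
           + [("group_b", False, False)] * 40 + [("group_b", True, False)] * 10)

for group, rate in false_match_rates(results).items():
    print(f"{group}: false match rate {rate:.1%}")
# group_a: false match rate 2.0%  -- the headline number looks acceptable...
# group_b: false match rate 20.0% -- ...but one group fares ten times worse
```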
As well as protecting individual privacy, another key element of applying AI solutions is transparency, a quality that presupposes AI is explainable. The wider discussion of ethical AI incorporates terms that tend to be contextual – words like 'just' and 'fair' and 'responsible'. These labels are extremely difficult to quantify, particularly for security and law enforcement people, thanks to a profound internal contradiction. Justly and responsibly protecting citizens is an argument for monitoring the communications and activities of citizens, which breaches the general law of 'harm no human'. Simultaneously, not monitoring citizens may mean failing to detect planning for a terrorist attack – breaching the same law through inaction.
A question the electronic security industry needs to keep front of mind is the level of decision-making AI is granted. AI has a singleness of purpose, and in many applications human partners need to sit adjacent to it, adding context to the conclusions drawn from datasets. Human oversight needs to be engaged from the decision to use AI in a given application, through the nature of the datasets AI is tasked with assessing, to the evaluation of the results of any AI process.
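As a sketch of what an 'adjacent' human partnership might look like in software – the labels and thresholds here are hypothetical, not drawn from any particular VMS – an alert pipeline can route low-confidence AI detections to a human operator rather than acting on them automatically:

```python
from dataclasses import dataclass

AUTO_ACTION_THRESHOLD = 0.95   # hypothetical confidence cut-off
REVIEW_THRESHOLD = 0.60        # below this, treat as noise

@dataclass
class Detection:
    camera_id: str
    label: str         # e.g. "intruder", "vehicle"
    confidence: float  # model confidence, 0.0-1.0

def route_detection(det: Detection) -> str:
    """Decide whether a detection is actioned, reviewed by a human, or dropped."""
    if det.confidence >= AUTO_ACTION_THRESHOLD:
        return "auto_action"     # high confidence: raise the alarm automatically
    if det.confidence >= REVIEW_THRESHOLD:
        return "human_review"    # uncertain: queue for an operator's judgement
    return "discard"             # low confidence: log only

print(route_detection(Detection("cam_07", "intruder", 0.97)))  # auto_action
print(route_detection(Detection("cam_07", "intruder", 0.72)))  # human_review
```

Where the thresholds sit is itself an ethical decision – set the review band too narrow and the human partner is cut out of the loop entirely.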
There needs to be a trail of accountability, too. Does accountability for the use of AI rest with the security integrator, the security operator, the security manager, senior management, or the board of directors? That accountability needs to be expressed in a site's security procedures and reflected in the nature of the AI applications that can be undertaken. It's hard to believe accountability will never be imposed by legislation – government involvement in the scope of AI applications is a matter of when, not if.
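In software terms, a trail of accountability implies that every AI-assisted decision is recorded alongside the role that owns it under site procedures. A minimal sketch, with hypothetical field names, might look like this:

```python
import json, time

def log_ai_decision(log_path, detection, outcome, accountable_role):
    """Append an AI decision to an audit trail (sketch only).

    `accountable_role` names who owns the decision under site procedures,
    e.g. "security_operator" or "security_manager". Fields are hypothetical.
    """
    record = {
        "timestamp": time.time(),
        "detection": detection,
        "outcome": outcome,
        "accountable_role": accountable_role,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_ai_decision("ai_audit.jsonl",
                {"camera": "cam_07", "label": "intruder", "confidence": 0.72},
                "human_review", "security_operator")
```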
A condition mentioned earlier that's at once vital and impossible to achieve when it comes to AI is transparency. Deep learning operates by correlating patterns across vast datasets in ways not even its creators can explain, and it's simply not possible to offer transparency of a system whose workings rest on logical processes that are opaque. This opaqueness makes it challenging to establish the source of AI errors or biases. How do you investigate the cause of an event, or assign accountability for an event, when its cause cannot be explained or investigated? And can you trust the findings of an inscrutable process when it comes to critical decisions regarding safety and security?
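One partial workaround – a sketch, not a solution – is black-box probing: perturb the inputs and watch how the model's output shifts, which hints at what drove a decision even while the internals stay opaque. The model here is a toy stand-in, not any real product's API:

```python
def feature_influence(model, sample, baseline=0.0):
    """Estimate each input feature's influence by zeroing it out (occlusion).

    `model` is any black-box scoring function; we can't see inside it,
    but we can measure how the score moves when a feature is removed.
    """
    base_score = model(sample)
    influence = {}
    for name in sample:
        perturbed = dict(sample, **{name: baseline})
        influence[name] = base_score - model(perturbed)
    return influence

# Stand-in "opaque" model: a weighted score we pretend we can't inspect.
def toy_model(features):
    weights = {"motion": 0.5, "loitering_time": 0.3, "time_of_day": 0.2}
    return sum(weights[k] * v for k, v in features.items())

sample = {"motion": 0.9, "loitering_time": 0.8, "time_of_day": 0.1}
for feature, delta in feature_influence(toy_model, sample).items():
    print(f"{feature}: score drops by {delta:.2f} when removed")
```

Probing of this kind can flag which inputs dominated a decision, but it explains behaviour, not reasoning – the underlying contradiction of transparency remains.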
Despite the challenges of using AI ethically, this technology is so useful, so capable, that it would be wrong not to deploy it everywhere we could. Traditional electronic security solutions collect vast amounts of data every day from hundreds of millions of sensors and cameras – data that is never used to inform careful decision-making by security and operations management, data that goes entirely to waste. The ability of AI to liberate the patterns in this data to enhance our safety and security is enormous. Our responsibility for AI's ethical use in security applications is enormous, too.
#securityelectronicsandnetworks.com