Controversy erupting over facial recognition software in Australia

Australian retail giant Kmart has been found to have broken the law by using facial recognition CCTV to detect fraudulent refunds, fueling a global conversation about the technology's legitimacy.

In a significant ruling, the Office of the Australian Information Commissioner (OAIC) has declared that Kmart Australia's use of facial recognition technology (FRT) to collect customers' personal and sensitive information is unlawful. This decision follows a three-year investigation into retail chains' use of FRT and comes a year after fellow Wesfarmers subsidiary Bunnings was also found to have violated the Privacy Act.

The OAIC found that Kmart failed to notify shoppers, or obtain their consent, before collecting their biometric information via FRT. No financial penalties were imposed on Kmart, but the OAIC reminded businesses to keep 'privacy considerations' at the core of their decisions when weighing the deployment of new technologies.

Across the globe, the use of FRT is a subject of intense scrutiny and debate. In New York, a report by Amnesty International found a higher concentration of facial-recognition-compatible CCTV cameras in areas with a higher proportion of non-white residents, raising concerns about potential biases and privacy violations.

In Europe, several companies operating face recognition services in public spaces, including those offering public facial search engines, are under scrutiny due to concerns about privacy violations and non-compliance with GDPR and the EU AI Act. However, enforcement remains challenging, and at least four providers openly advertise their paid services that identify strangers from snapshots online.

The EU's AI Act prohibits real-time remote biometric identification systems, including FRT deployed in public spaces, though exceptions remain for law enforcement purposes. In the UK, the London Metropolitan Police's use of FRT at protests and large gatherings is under intense scrutiny, with the Equality and Human Rights Commission (EHRC) expressing concern that the Met's current policy on FRT use is incompatible with the European Convention on Human Rights.

The use of FRT by the London Metropolitan Police is also subject to an impending judicial review regarding compliance with human rights law. Human Rights Watch reports that FRT systems have been used in Iran to track women failing to adhere to its hijab law, in China to target people based on ethnicity, and in Russia to restrain political dissent.

In the US, there are currently no federal laws expressly regulating the use of FRT, but several states have moved to restrict mass biometric data collection through such technologies. Rochelle Garza, chair of the US Commission on Civil Rights, warned that the unregulated use of facial recognition technology poses significant risks to civil rights, especially for marginalized groups.

In the UK, the Ada Lovelace Institute released a report calling for new, risk-based legislation governing FRT and criticizing the country's current governance approach. Returning to the Australian case, the OAIC added that Kmart's deployment of FRT for fraud prevention was 'of limited utility' and that less privacy-intrusive alternatives were available.

In conclusion, the use of facial recognition technology is a complex issue with significant implications for privacy, civil rights, and human rights. As technology continues to evolve, it is crucial that governments and businesses prioritize transparency, consent, and the protection of individual rights in their deployment of such technologies.
