New Guide Empowers Australian Tech Workers to Expose Corporate Wrongdoings

The Human Rights Law Centre has released a new guide that empowers Australian tech employees to speak out against harmful company practices or products.

The guide, Technology-Related Whistleblowing, provides a summary of legally protected avenues for raising concerns about the harmful impacts of technology, as well as practical considerations.

SEE: ‘Right to Disconnect’ Laws Push Employers to Rethink Tech Use for Work-Life Balance

“We’ve heard a lot this year about the harmful conduct of tech-enabled companies, and there is undoubtedly more to come out,” Alice Dawkins, Reset Tech Australia executive director, said in a statement. Reset Tech Australia is a co-author of the report.

She added: “We know it will take time to progress comprehensive protections for Australians for digital harms – it’s especially urgent to open up the gate for public accountability via whistleblowing.”

Potential harms of technology are an area of focus in the Australian market

Australia has experienced relatively little tech-related whistleblowing. In fact, Kieran Pender, the Human Rights Law Centre’s associate legal director, said, “the tech whistleblowing wave hasn’t yet made its way to Australia.”

However, the potential harms involved in technologies and platforms have been in the spotlight due to new laws introduced by the Australian government, as well as various technology-related scandals and the media coverage surrounding them.

Australia’s ban on social media for under 16s

Australia has legislated a ban on social media for citizens under 16, coming into force in late 2025. The ban, spurred by questions about the mental health impacts of social media on young people, will require platforms like Snapchat, TikTok, Facebook, Instagram, and Reddit to verify user ages.

A ‘digital duty of care’ for technology companies

Australia is in the process of legislating a “digital duty of care” following a review of its Online Safety Act 2021. The new measure requires tech companies to proactively keep Australians safe and better prevent online harms. The approach mirrors similar legislation in the U.K. and the European Union.

Bad automation in tax Robodebt scandal

Technology-assisted automation in the form of taxpayer data matching and income-averaging calculations resulted in 470,000 wrongly issued tax debts being pursued by the Australian Taxation Office. The so-called Robodebt scheme was found to be illegal and resulted in a full Royal Commission investigation.

AI data usage and impact on Australian jobs

An Australian Senate Select Committee recently recommended establishing an AI law to govern AI companies. OpenAI, Meta, and Google LLMs would be classified as “high-risk” under the new law.

Much of the concern involved the potential use of copyrighted material in AI model training data without permission, as well as AI’s impact on the livelihoods of creators and other workers. A recent OpenAI whistleblower shared some of these concerns in the U.S.

Consent an issue in AI model health data

The Technology-Related Whistleblowing guide points to reports that an Australian radiology company handed over medical scans of patients without their knowledge or consent for a healthcare AI start-up to use the scans to train AI models.

Photos of Australian kids used by AI models

Analysis by Human Rights Watch found that LAION-5B, a data set used to train some popular AI tools by scraping internet data, contains links to identifiable photos of Australian children. Neither the children nor their families consented to this use.

Payout after Facebook Cambridge Analytica scandal

The Office of the Australian Information Commissioner recently approved a $50 million settlement from Meta following allegations related to the Cambridge Analytica scandal.
