
In a recent statement, California's Attorney General indicated that numerous activities carried out by AI companies may break the law.

A recently issued legal memo suggests that Silicon Valley's most thriving industry could be built on illegal practices.


The legitimacy of the AI industry's business practices has long been a bone of contention. As a revolutionary technology, artificial intelligence has offered significant benefits to society while also causing numerous problems. Chief among them are its potential to mislead consumers, to produce new forms of disinformation and propaganda, and to discriminate against particular groups of people. Recently, the California Attorney General's office issued a legal memo emphasizing that these actions could be illegal.

On January 13th, California Attorney General Rob Bonta released two legal advisories highlighting areas where AI companies may be at risk of breaking the law. The advisories encourage responsible AI use that prioritizes safety, ethics, and human dignity, and then outline potential legal pitfalls for AI companies, such as:

  • Utilizing AI to foster or advance deception. The rise of AI content generators has flooded the internet with fake content, fueling concerns about deepfakes and disinformation. California's memo warns that companies using AI to create deepfakes, chatbots, or voice clones in ways that mislead the public could be deemed deceptive and violate state law.
  • Engaging in false advertising. Overzealous claims are common in the AI industry, with many companies exaggerating what their tools can accomplish. To avoid running afoul of California's false advertising law, companies should refrain from making misleading claims about their AI's capabilities.
  • Creating or selling an AI system or product that has a disproportionate impact on protected classes or perpetuates discrimination or segregation. AI systems have been shown to reproduce human bias, which is especially concerning now that they are used to vet applicants for housing and employment. Companies deploying such systems should be aware of California's anti-discrimination laws.

Furthermore, Bonta's advisory summarizes recently passed regulations that apply to the AI industry. While the memo states that companies may already be breaking the law, its tone suggests that self-regulation is the best way to avoid legal trouble.

However, the memo does not address U.S. copyright law, another gray area where AI companies frequently run into trouble. OpenAI is currently being sued by the New York Times for allegedly violating U.S. copyright law by using the paper's articles to train its models. The industry has faced numerous lawsuits over this issue, but given the ambiguous legal landscape, none has yet produced a definitive verdict.

Going forward, tech companies should be cautious about using AI to create deepfakes, chatbots, or voice clones in ways that could be deemed deceptive under state law. They should also steer clear of misleading claims about their AI's capabilities, which could amount to false advertising.

As the AI industry continues to evolve, it's crucial for companies to address the potential impact of their AI systems on protected classes and avoid perpetuating discrimination. Failure to do so could lead to violations of California's anti-discrimination laws.
