The growing importance of artificial intelligence (AI) is creating novel and interesting challenges for California businesses and out-of-state companies that do business in California. Moreover, as some of the most important technology companies in the world are based in California, the state is at the forefront of many legal issues related to AI. Here are some of the most notable AI legal issues in California that businesses should consider:

California Has Numerous Laws Relating to Privacy That May Be Implicated by AI

Privacy, particularly consumer privacy, is one of the most significant legal issues related to AI in California. The California Consumer Privacy Act (CCPA), enacted in 2018, provides consumers with certain rights regarding the collection, use, and sharing of their personal information. The CCPA was amended by the California Privacy Rights Act (CPRA), which was passed in November 2020 and largely went into effect on January 1, 2023. The CCPA imposes strong consumer protection requirements and applies to companies that either: (1) have gross annual revenue of over $25 million; (2) buy, receive, or sell the personal information of 100,000 or more California residents, households, or devices; or (3) derive 50% or more of their annual revenue from selling California residents’ personal information. Companies that are subject to the CCPA and use AI technologies to collect and process personal information must comply with the CCPA’s requirements. These requirements help ensure that personal data is collected for the benefit of the consumers providing it, not for the benefit of third parties who want to acquire and use the information for their own purposes. It is therefore important that any company whose AI models rely on personal data know the provenance of that data. Additionally, the CCPA covers inferences drawn from personal data: if a company creates a profile of a consumer by connecting a group of data points, that profile is also covered by the CCPA and must be made available to the consumer. Businesses should therefore understand where the data used in their AI models comes from and how it is being used.

Businesses Should Consider How to Contractually Address Product Liability Issues Relating to AI

Another issue companies should consider is what liability they may face for harm and damages caused by AI. At this point, the answer is not entirely clear, but there are many important questions to consider: Who should be responsible if an AI algorithm makes a decision that causes harm? How should fault be identified and apportioned in such cases? What remedy does someone negatively impacted by an AI algorithm have? There is also the question, not yet clearly determined, of whether an AI system should be treated as a product or a service. Moreover, it can be difficult to distinguish damages resulting from an AI system’s “free will” from damages caused by an actual defect in the product. California law recognizes the concept of strict liability, which holds manufacturers and sellers responsible for harm caused by their products even if they were not negligent. However, the application of strict liability to AI systems remains an open question, and some argue that a negligence standard should apply to AI since it essentially acts in the capacity of a human. California lawmakers are considering legislation to clarify product liability issues related to AI.

As a result, it is a good idea for businesses that use AI to consider addressing product liability arising from the use of AI, from a contractual perspective, in order to try to have a framework for handling and apportioning risk relating to the use of AI.

Companies Must Be Careful That AI Does Not Engage in Bias and Produce Discriminatory Outcomes

One specific type of liability companies may face relating to the use of AI is liability for bias and discrimination created by AI. California law prohibits discrimination based on certain protected characteristics, including, but not limited to, race, gender, age, and sexual orientation. However, AI systems may inadvertently perpetuate or amplify existing biases, leading to discriminatory outcomes. AI is technically a subfield of computer science that aims to build systems capable of gathering data and using that data to make decisions and solve problems, although the term is often used imprecisely to refer to various types of automation. There are two basic types of AI: “simple” or non-machine-learning AI, and machine-learning AI. Simple AI requires human intervention at every step, so it is relatively easy to identify and fix bias. It is harder to do so with machine-learning AI because it is not always possible to determine how a machine-learning system made a decision.

California lawmakers have introduced several bills aimed at addressing bias and discrimination in relation to AI, including AB-13. AB-13 would require certain companies to disclose their use of an automated decision-making system (any computational process, including one derived from machine learning, statistics, or other data-processing or artificial-intelligence techniques, that makes a decision, or facilitates human decision making, that impacts persons) and to provide an explanation of how the system works and what its impact is. The law would apply to any digital or software company that uses an automated decision system. If the bill goes into effect, companies subject to AB-13 could face penalties for failing to comply with its requirements.

The California Fair Employment and Housing Council has also proposed modifications to its employment anti-discrimination rules that would impose liability on companies that use AI tools with a discriminatory impact. The proposed rules would make it “unlawful for an employer or a covered entity to use qualification standards, employment tests, automated-decision systems, or other selection criteria that screen out or tend to screen out an applicant or employee or a class of applicants or employees on the basis of a [protected characteristic], unless the standards, tests, or other selection criteria, as used by the covered entity, are shown to be job-related for the position in question and are consistent with business necessity.”

However, regardless of whether these proposed regulations go into effect, companies that use AI should be proactive about identifying and eliminating bias in their algorithms and about ensuring that those algorithms do not produce discriminatory outcomes. Companies should also conduct proper due diligence when acquiring AI-powered tools from vendors.

Businesses Should Consider Issues Relating to Ownership of Intellectual Property Created by AI

There is also the interesting question of who owns intellectual property created by AI technologies. For instance, who owns the rights to the data used to train an AI system? This relates to the privacy concerns regarding personal data discussed above. Legal issues also arise from AI’s use of intellectual property owned by third parties. For instance, AI art generators scrape vast numbers of images from the internet, many of them copyrighted, to teach the system how to create images. Numerous artists have argued that this infringes their copyrights in the art they have created. There is also the question of who owns the output generated by an AI system. Currently, the United States Copyright Office will not register a work created entirely by AI, since a copyrightable work must have human authorship. In terms of ownership, however, the person who used AI to create something is generally considered the owner of the resulting product; the AI itself is considered a tool and cannot own anything. That said, developers of AI systems might contractually reserve for themselves ownership of works created by their systems. California law provides strong protections for intellectual property rights, and companies that use AI technologies must navigate these legal frameworks carefully.

In conclusion, California is grappling with various legal and ethical issues related to AI, including privacy, product liability, bias and discrimination, and intellectual property. As AI technologies continue to advance, some of these questions will be resolved by the legislature and the courts, but these issues are also likely to become increasingly complex and challenging to address. Stay tuned for more developments.

Please do not hesitate to reach out to TALG if you have any questions about legal issues relating to artificial intelligence.