Towards Trustworthy AI: Building Resilience Through Policy and Compliance

News

Jul 17, 2025

How can businesses in your jurisdiction adopt AI and automation responsibly, and what guidance are you offering to ensure regulatory compliance?

Machine Learning (“ML”) has created an interesting paradigm for firms looking to operate more efficiently and stay competitive. Businesses may adopt Artificial Intelligence (“AI”) responsibly by having robust policies in place concerning the source of the AI, appropriate training, and written guidance on use. More particularly, businesses should start with a basic written policy that creates the framework for how employees may use AI in their day-to-day operations. That policy will, of course, need to be updated regularly as machine learning continues to evolve. As to the source of the AI itself, companies should proceed cautiously and thoroughly scrutinise all AI platforms before relying completely on a single source; indeed, relying on two well-vetted AI platforms is a best practice for ensuring the reliability of information. Companies in sensitive consumer-facing industries (such as healthcare and banking) should adhere to especially cautious policies that ensure compliance with all relevant laws and regulations. A working committee should also be formed that meets regularly to discuss the firm’s implementation of AI. As with all things in technology, adaptation is critical as machine learning continues to evolve.

In addition, firms should consider developing internal audit processes to evaluate how AI is being used across departments and whether its application aligns with regulatory and ethical guidelines. These audits can be both preventative and corrective, helping to identify potential compliance issues before they escalate. Embedding this kind of oversight into corporate governance will not only improve internal trust in AI use but also demonstrate to regulators that the company takes responsible AI deployment seriously.

What are the key risks of implementing AI, from data privacy to ethical concerns, and how can you help businesses in your jurisdiction navigate these complexities?

Depending on the business sector, the risks may be substantial. Data privacy concerns should be paramount for any business, as the litigation exposure alone is significant. Understanding how the platform works, how it stores data (for instance, does it have a data retention policy?), and when that data is deleted is critical. Another question businesses should ask: how is the data being used by the platform?

Moreover, with respect to ethical concerns, businesses should tread carefully to ensure that employees do not upload client data that is, in many instances, confidential. Firms in the financial, legal and medical professions should be especially diligent with respect to client or patient information. Safeguards must be engineered to prevent the uploading of sensitive client or patient data to a third-party platform that may share that data with its engineers or with third parties. Firms should also be cognisant of the inherent risks of asking any platform to draft operating policies, especially if those policies contain proprietary information or trade secrets.

Another emerging concern involves the use of biased or unvetted data sets, which can lead to discriminatory outcomes – intentionally or otherwise. Businesses must evaluate not just what AI can do, but what it should do, particularly in areas like recruitment, lending, or service delivery. A thorough bias audit or algorithmic impact assessment, ideally conducted with legal oversight, can be an effective way to flag hidden risks before they translate into reputational damage or regulatory action.

Are you seeing any trends in AI-driven disputes or liability concerns? How can firms assist clients in addressing potential AI-related litigation or regulatory scrutiny?

While AI-driven litigation is still in its infancy, we are seeing interesting disputes concerning copyright (authorship) and certain patent (inventorship) related issues. AI has also generated new legal precedent answering novel questions, such as who is the author of a literary work of expression when it is generated by a machine. These issues will continue to proliferate as AI continues to be utilised in our society.

Firms may assist clients in addressing AI-related litigation and regulatory scrutiny by helping those clients craft appropriate AI policies and practices. Firms can also train staff on using AI responsibly, mitigating litigation and regulatory exposure.

Machine Learning is growing at a rapid pace, and while the technology is both impressive and helpful, it has the potential for misuse. Indeed, one disturbing trend is bad actors utilising AI to impersonate victims (both in identity and in voice) to gain access to sensitive information or to create AI “clones”.

As a precaution, businesses should be encouraged to adopt a proactive stance on liability allocation by embedding clear terms in their contracts with AI vendors and service providers. These agreements should address ownership of AI outputs, indemnification in case of misuse, and limitations on the platform’s access to sensitive data. Clarity on these fronts can reduce the risk of disputes and ensure a faster, more coordinated legal response should an issue arise.

Key Takeaways:

Responsible adoption of AI in the US requires businesses to develop written policies governing AI usage, carefully assess platform providers, and maintain internal oversight. Regular audits and the formation of cross-functional committees are essential to ensure compliance and ethical use across departments.
Businesses must safeguard sensitive client and operational data, particularly in regulated sectors such as finance, law, and healthcare. Companies should avoid uploading confidential information to third-party platforms and conduct bias assessments of AI tools to prevent discriminatory outcomes.
Early disputes are emerging around authorship, inventorship, and deepfake misuse. Firms can support clients by helping craft enforceable AI policies, offering legal oversight during adoption, and ensuring contractual clarity on issues such as liability, ownership of outputs, and indemnities.

Author

  • Ismail Amin

    Ismail’s legal experience encompasses serving Fortune 500 companies, mid-sized privately held companies, and entrepreneurs. He presently serves as Corporate and Litigation Counsel to large and mid-sized businesses throughout California, Nevada, Texas, North Carolina, and New York as well as General and Personal Counsel to high-profile hospitality operators in California and Nevada. Ismail’s practice emphasizes Business and Intellectual Property matters, with a focus on healthcare, biopharmaceuticals, biotechnology, and hospitality. Ismail has counseled the firm’s healthcare provider clients in acquiring or selling assets while maximizing return and minimizing risk. He has helped clients acquire or sell over $1 billion worth of healthcare-related assets, including hospitals.
