Cyber Risk Advisory

Everything Old is New Again: AI Compliance is a Privacy Problem

Joe Nelson

Senior Consultant, Coalfire (JD, AIGP, CIPP/US, CIPM)

July 29, 2025

AI is surging, but not all of its accompanying governance problems are new.  In fact, most AI compliance issues are just the same old (and ongoing) privacy problems resurfacing under different names.  The good news?  Involving an empowered privacy team can prevent or resolve the vast majority of your AI governance problems.   

For context, data privacy laws, from the GDPR to the CCPA, generally align with the FIPPs, or “Fair Information Practice Principles,” which date back to the 1970s.  These principles include providing notice/transparency, collecting personal data only for specified purposes, limiting use to those purposes, minimizing personal data collection, ensuring data quality, requiring security safeguards, and providing for accountability and individual participation.

They are also central to the responsible deployment of AI.  The accuracy and reliability of AI systems depend upon data quality; transparency into training data is required to understand model outputs, to avoid bias or discrimination, and to ensure the data was properly obtained; and models should be limited to their intended purpose(s) to avoid function creep.  While not every potential AI governance issue has foundations in privacy, quite a few do.

Below, we show how several well-known AI mishaps are really privacy problems.  For each, we touch upon how a solid privacy program might have mitigated the problem.

Meta’s AI Chatbot

In 2025, Meta released its AI chatbot, advertising the bot as “more personal,” perhaps implying a degree of confidentiality or privacy.  It included a “Discover” feed, “a place to share and explore how others are using AI.”  Users appeared to be largely unaware that hitting a “share” button published their text, audio exchanges, and images to the general public.  Such exchanges included legal and medical inquiries, dating advice, and more.  The chatbot acknowledged (when queried) that users may inadvertently share sensitive information due to “misunderstandings about platform defaults,” among other things.

AI governance addresses privacy concerns from the use of personal or confidential data in model training through to the generated outputs and their effect on the humans involved.  Companies and individuals alike must understand how an AI system treats queries and responses, with an eye toward potential leakage through subsequent training, patching, and editing, as well as toward the confidentiality (or lack thereof) of outputs.

Similarly, under the FIPPs, privacy controls and consent processes must be transparent.  Meta claimed users had to take several steps before opting in to sharing, but reports (and the model’s own admissions) suggest public sharing may have been enabled by default and/or not adequately explained to users.

When addressing the transparency and ease-of-use of personal-data privacy controls, the maxim uttered by Robert De Niro’s character in Ronin should apply: “Whenever there is any doubt, there is no doubt.”  In a privacy-centric AI environment, an empowered privacy program should be involved in product design, in effect advocating for the humans using the system.  The ability to learn from others how to query AI is a neat feature.  Privacy-centric design would ensure awareness, at the time of sharing, that personal, private information may be disclosed, removing any doubt.  It would ensure that settings are private by default and easy to locate (not buried in several menus).
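For illustration only, here is a minimal sketch of what that private-by-default design can look like in code.  The names and structure are hypothetical (this is not Meta’s implementation); it simply shows publishing gated on both a deliberately changed setting and an in-context confirmation at the moment of sharing.

```python
# Minimal sketch (hypothetical names): privacy-by-default sharing for an AI chat feature.
# Illustrates the FIPP-aligned design discussed above: private by default, with an explicit,
# informed confirmation required at the time of sharing.

from dataclasses import dataclass, field


@dataclass
class SharingSettings:
    # Conversations stay private unless the user explicitly changes this setting.
    publish_to_discover_feed: bool = False


@dataclass
class Conversation:
    contents: str
    settings: SharingSettings = field(default_factory=SharingSettings)


def publish(text: str) -> None:
    # Placeholder for the actual feed-publishing call.
    print(f"Published publicly: {text[:40]}...")


def share_conversation(convo: Conversation, user_confirmed_public: bool) -> bool:
    """Publish only when the private-by-default setting has been changed AND the user,
    after seeing a plain-language warning, has just confirmed the post will be public."""
    if not convo.settings.publish_to_discover_feed:
        return False  # private by default; the share button alone never publishes
    if not user_confirmed_public:
        return False  # require an in-context confirmation at the moment of sharing
    publish(convo.contents)
    return True


if __name__ == "__main__":
    convo = Conversation("Should I worry about this medical symptom?")
    assert share_conversation(convo, user_confirmed_public=True) is False   # still private
    convo.settings.publish_to_discover_feed = True
    assert share_conversation(convo, user_confirmed_public=False) is False  # no silent publish
    share_conversation(convo, user_confirmed_public=True)                   # informed opt-in
```

The design choice is the point: sharing requires two affirmative acts by the user, so there is no doubt about what is being disclosed.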

DoNotPay’s AI Robot Lawyer

In January 2025, the FTC finalized a consent order with DoNotPay, an AI-powered “robot lawyer” that DoNotPay had claimed “operated like a human lawyer.”  It was designed to help people fight traffic tickets or sue for assault, among other things.  DoNotPay’s chatbot solicited private, confidential information from individuals with few to no safeguards.  The FTC alleged that DoNotPay was, in actuality, little more than a generic generative-AI platform that encouraged extensive disclosure of private information without providing the promised results.

In a traditional attorney-client relationship, the attorney-client privilege protects the personal, confidential information obtained from individuals who are often in difficult circumstances.  The FTC alleged that DoNotPay collected personal, confidential information under the guise of a confidential legal relationship where no such confidentiality actually existed.

DoNotPay illustrates how the notice/transparency principle affected both the disclosure of private information and the representations about the model’s capabilities.  Users were under the impression that they were in a confidential, attorney-client relationship when, in actuality, they were not.  The site’s Terms & Conditions (as of April 2024) continue to muddy the waters: they state that DoNotPay is not a law firm, yet they still hedge on the privilege, saying that “any communications between you and DoNotPay may not [vs. will not] be protected under the attorney-client privilege doctrine.”

Transparency goes beyond disclaimers and privacy notices, and it should not leave a doubt.  Those deploying AI must be clear about the limitations of their models, such as hallucination rates, model creep, and (as in this case) whether the model actually has the specialized training and abilities it is claimed to have.  In DoNotPay’s case, privacy problems were entwined with the other issues that resulted in the FTC’s fine and twenty-year consent order.  A privacy program operating in accordance with the FIPPs would have encouraged transparency and adequate notice, both about the confidentiality of users’ submissions and about the limitations of the model.  This likely would have precluded many of the company’s privacy and AI-governance problems.

Clearview AI's Facial Recognition

Clearview AI took internet scraping to another level when it scraped billions of facial photographs, created a database tying names to faces, and then offered its AI-powered facial-recognition product to the public and to law enforcement.  Clearview’s scraping and training differ fundamentally from the conduct at issue in ongoing copyright litigation: instead of training models on works protected under copyright law, Clearview trained its model on individuals’ faces.

Facial recognition raises significant privacy concerns: it chills First Amendment participation, a face cannot be changed the way a password or credit-card number can, and it cannot be left at home to avoid tracking the way a cell phone can.  And those are the problems when the technology works; the potential for misidentification, especially with regard to skin tone (as with Rite Aid’s AI a few years ago), raises an additional set of privacy harms.

Clearview recently settled a class action; the settlement includes a prohibition on the sale of its product to the public.  The settlement did not include any deletion rights for class members beyond the statutory deletion rights already available to residents of CA, IL, CO, CT, IA, MT, UT, and VA.  Twenty-one attorneys general objected to the settlement (which is currently on appeal), alleging, among other things, inadequate notice to potential class members and insufficient relief.

Clearview’s actions in building its AI system violated many of these long-standing privacy principles.  A strong privacy program might have helped mitigate the risk both of the lawsuits and of regulatory action.  For example:

  • Notice/Transparency:  Clearview did not notify data subjects or obtain their consent for its actions.  The settlement did not resolve this (it rejected objections to Clearview’s continued use of the photos).  A privacy program that implemented consent verification before the photos were obtained would have limited these claims and the exposure to future state regulatory actions (see the sketch after this list).
  • Purpose Specification + Use Limitation:  Images scraped by Clearview were not posted by their subjects for the purpose of inclusion in a biometric facial-recognition system used by the general public or law enforcement.  A privacy program that ensured disclosure of the purpose and use of the photos, so that consent could be fully informed, would have avoided these claims and future state regulatory actions.
  • Data Quality:  Indiscriminate scraping may sweep in outdated, inaccurate, or irrelevant images, which, when the product is sold to law enforcement, raises the risk of misidentification.  A strong data-privacy program should include data-quality standards ensuring that personal data is current and accurate; while this alone would not address the majority of the privacy problems with this system, it would reduce the potential harms arising out of misidentification.
  • Individual Participation:  While only a handful of states mandate individual participation (data access rights), in the EU, Clearview reportedly obstructed individuals’ ability to exercise their access rights and failed to respond to access requests.  There are also reports in the US that Clearview is unable to honor data-subject “opt out” requests under the California Consumer Privacy Act (CCPA).  A solid privacy program builds trust in an organization by allowing people to participate in the data-collection process: to understand what data Clearview holds, to check whether it is accurate, and to request its deletion.  Establishing a privacy program grounded in trust and individual participation would have built the ability to honor these rights into the system.
  • Accountability:  Tied to the prior FIPP, accountability affirmatively and transparently ensures compliance with the foregoing principles.  The settlement only awards class members a financial stake in Clearview; the company did not agree to stop using or scraping images (and has continued), and thus accepts no accountability for its privacy violations, despite ongoing state regulatory investigations in the US.  An empowered privacy program proactively addresses privacy risk, providing accountability through active participation in product design, regular audits, and transparency and participation in the collection and use of personal data.  Privacy accountability here would have reduced the risk from past and ongoing lawsuits and regulatory actions.
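For illustration only, the sketch below (hypothetical names; not Clearview’s actual pipeline) shows how the notice/consent and purpose-specification principles above can be enforced as a hard gate before a scraped image is ever admitted to a training set.

```python
# Minimal sketch (hypothetical names): consent- and purpose-gated ingestion of scraped images.
# Illustrates treating verified consent, scoped to a disclosed purpose, as a precondition for
# admitting personal data into a training set.

from dataclasses import dataclass


@dataclass
class ScrapedImage:
    url: str
    subject_consented: bool           # verified consent from the data subject
    consented_purposes: frozenset     # purposes disclosed when consent was obtained


SYSTEM_PURPOSE = "biometric_identification"


def admit_to_training_set(image: ScrapedImage) -> bool:
    """Admit an image only if the subject consented and that consent covered
    the specific purpose for which this system will use the data."""
    if not image.subject_consented:
        return False  # notice/consent: no verified consent, no ingestion
    if SYSTEM_PURPOSE not in image.consented_purposes:
        return False  # use limitation: consent for one purpose is not consent for all
    return True


if __name__ == "__main__":
    ok = ScrapedImage("https://example.com/a.jpg", True, frozenset({SYSTEM_PURPOSE}))
    no_consent = ScrapedImage("https://example.com/b.jpg", False, frozenset())
    wrong_purpose = ScrapedImage("https://example.com/c.jpg", True, frozenset({"social_sharing"}))
    print([admit_to_training_set(i) for i in (ok, no_consent, wrong_purpose)])  # [True, False, False]
```

The point is architectural: when consent and purpose are checked before collection rather than litigated afterward, most of the claims described above never arise.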

Despite the settlement, Clearview AI remains a potential target for state law enforcement by these (and other) attorneys general. 

* * *

While truly novel AI-governance issues certainly exist, the most publicized harms to people arise out of a failure to adhere to time-tested privacy principles.  An empowered privacy program applies these principles throughout the design and implementation of a product or service, from technical (privacy-by-design) and risk-management (privacy impact assessments) perspectives, among others.

The privacy and AI-governance specialists in Coalfire's Cyber Risk Advisory work with organizations to ensure privacy programs fortify the development of responsible, compliant AI systems.  Get in touch to discuss how we can help you.