Welcome to this month’s issue of The BR Privacy & Security Download, the digital newsletter of Blank Rome’s Privacy, Security, & Data Protection practice. We invite you to share this resource with your colleagues and visit Blank Rome’s Privacy, Security, & Data Protection webpage for more information about our team.
STATE & LOCAL LAWS & REGULATION
California Enacts Series of AI Laws: California has enacted a series of laws designed to address the ethical concerns surrounding artificial intelligence (“AI”) and protect individuals from the misuse of digital content. SB 942 (the “California AI Transparency Act”) requires developers of widely used AI systems to provide certain AI-detection tools and watermarking capabilities to help identify AI-generated content. AB 2013 requires developers of generative AI systems that are made publicly available to Californians to publish a high-level summary of the datasets used to train such systems. AB 2602 requires contracts that permit the creation or use of a digital replica to include a description of how the digital replica will be used. AB 1836 grants estates the right to control and protect the use of a deceased person’s digital replica for up to 70 years after their death. SB 926 makes it a crime to create and distribute sexually explicit images of a real person that appear authentic when intended to cause that person serious emotional distress. SB 981 requires social media platforms to establish a mechanism for users to report sexually explicit deepfakes of themselves.
Colorado AG Discusses AI Regulations: Colorado Attorney General Phil Weiser spoke at the Silicon Flatirons Conference about privacy regulation at the state level, including AI regulation. In this speech, Attorney General Weiser outlined his plans for making Colorado a “model” of how to “use data and AI for good” as well as how to “protect consumers by adopting appropriate practices and guardrails that ensure that the use of data and AI does not create material risks or harms to consumers.” He identified a “true north,” or core guiding principle, for both state privacy regulation and AI regulation – “to be a state that protects consumers, welcomes entrepreneurs, and encourages innovation.” He identified the keys to achieving this goal as (1) appropriately defining AI; (2) determining whether AI-specific regulations are appropriate, or whether laws of general applicability may suffice; (3) not merely adopting regulations recommended by industry leaders that may stifle competition without providing desired outcomes; and (4) ensuring that any regulation implemented does not stifle innovation and economic growth. He emphasized that any regulation must take a risk-based approach that follows the harm principle, ensuring that the harms of the new technology do not exceed its expected benefits. Finally, he reiterated Colorado’s stated approach to AI regulation, which focuses on commitments to robust transparency, reliable testing and assessment requirements, and after-the-fact enforcement.
California Amends CCPA to Include Neural Data: California Governor Newsom approved SB 1223, adding “neural data” to the list of sensitive personal information covered by the California Consumer Privacy Act (“CCPA”). “Neural data” is defined as “information that is generated by measuring the activity of a consumer’s central or peripheral nervous system, and that is not inferred from nonneural information.” SB 1223 is intended to address the emergence of consumer neurotechnologies such as neuromonitoring devices, cognitive training applications, neurostimulation devices, mental health apps, and so-called “brain wearables.” California also passed AB 1008, which further amends the definition of “personal information” under the CCPA to clarify that personal information can exist in various formats, including physical, digital, and “abstract digital formats.” Abstract digital formats include encrypted files, metadata, and artificial intelligence systems that are capable of outputting personal information.
CPPA Issues Enforcement Advisory on Dark Patterns: The California Privacy Protection Agency (“CPPA”) issued an Enforcement Advisory on the topic of dark patterns. The term “dark patterns” refers to user interface settings and functionality that subvert or impair consumers’ autonomy, decision-making, or choice when consenting to or otherwise making decisions regarding the privacy of their personal information. The four-page advisory includes key definitions to help businesses understand their obligations under the law and poses questions businesses can ask to determine whether they may have inadvertently implemented dark patterns on their websites or in their product offerings. It also includes a series of examples to help businesses identify designs that may violate the CCPA’s prohibition on the use of dark patterns. The Advisory emphasizes the importance of reviewing the design of user interfaces to ensure that they offer symmetrical choices, clear and easy-to-understand language, and statutorily required disclosures. The publication of this advisory portends dark pattern enforcement actions on the horizon.
California Governor Vetoes Mobile/Browser Opt-Out Bill: California Governor Newsom vetoed AB 3048, a bill that proposed amending the CCPA to require major technology companies to install “opt-out” preference signals in companies’ mobile operating systems and web browsers. The proposed legislation aimed to allow consumers interacting with businesses online to easily configure their privacy settings and signal to such businesses their intent to (1) opt out of the selling or sharing of their personal information and/or (2) limit the use of their sensitive personal information. In his message accompanying the veto, Governor Newsom cited concerns about the burden compliance would place on operating system developers, as no major mobile operating system currently incorporates this opt-out option, and noted that internet users are already able to exercise their opt-out rights through existing functionalities and downloadable plug-ins.
FEDERAL LAWS & REGULATION
House Committee Approves Children’s Privacy Bills: The House Committee on Energy and Commerce approved amended versions of the Children and Teens’ Online Privacy Protection Act and the Kids Online Safety Act for House floor consideration. However, the proposed amendments highlight challenges to passing the bills in the current session. Energy and Commerce Ranking Member Frank Pallone, D-N.J., stated that the substantial amendments in the approved bills leave stakeholders with little time to identify and address the consequences of the changes. The amendments also mean that the House version will now need to be reconciled with the Senate versions of the bills, which have been approved by the Senate in an omnibus package. The respective chambers have less than 25 working days in session following the election to make progress on the children’s privacy bills.
NIST Releases Updated Guidelines for Password Security: The National Institute of Standards and Technology (“NIST”) released updated guidelines for password security in its second public draft of revision four to NIST Special Publication 800-63, Digital Identity Guidelines (“SP 800-63”). SP 800-63B, the volume relating to authentication and authenticator management (one of the four volumes of SP 800-63), recommends a number of changes to password best practices. Among the most notable changes is that NIST is moving away from recommending password complexity requirements such as mixing uppercase and lowercase letters, numbers, and special characters. Instead, the focus has shifted to password length as the primary factor in password strength. NIST also no longer recommends mandatory periodic password changes, arguing that frequent password resets often lead to weaker passwords and encourage users to make minor, predictable changes. Instead, passwords should only be changed when there is evidence of compromise. The guidelines also recommend the use of multi-factor authentication wherever possible. NIST is requesting comments on the updated guidelines by October 7, 2024.
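For technical readers, the shift described above can be illustrated with a minimal Python sketch of a password check that enforces length and a compromised-password blocklist, with no composition rules and no expiration timer. The specific thresholds and blocklist entries below are illustrative assumptions for demonstration, not requirements quoted from SP 800-63B.

```python
# Illustrative password check reflecting the general direction of NIST's
# draft guidance: length over composition rules, plus a blocklist of
# known-compromised passwords. Thresholds and blocklist contents are
# assumptions chosen for this example, not text from SP 800-63B.

MIN_LENGTH = 8    # assumed minimum length for this sketch
MAX_LENGTH = 64   # verifiers should accept long passphrases

# Hypothetical blocklist; a real system would check against a large
# corpus of breached or commonly used passwords.
BLOCKLIST = {"password", "12345678", "qwertyuiop"}

def check_password(candidate: str) -> tuple[bool, str]:
    """Return (ok, reason). Deliberately enforces no complexity rules
    (no required digits, symbols, or mixed case) and no expiry."""
    if len(candidate) < MIN_LENGTH:
        return False, f"shorter than {MIN_LENGTH} characters"
    if len(candidate) > MAX_LENGTH:
        return False, f"longer than {MAX_LENGTH} characters"
    if candidate.lower() in BLOCKLIST:
        return False, "appears on the compromised-password blocklist"
    return True, "ok"
```

Note what is absent: there is no “must contain a special character” rule and no scheduled reset, consistent with the guidance’s reasoning that such requirements tend to produce weaker, more predictable passwords; a real deployment would pair a check like this with multi-factor authentication.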
CISA and HHS Publish Advisory on RansomHub: The Federal Bureau of Investigation (“FBI”), the Cybersecurity and Infrastructure Security Agency (“CISA”), the Multi-State Information Sharing and Analysis Center (“MS-ISAC”), and the Department of Health and Human Services (“HHS”) published a Joint Advisory to disseminate known RansomHub ransomware indicators of compromise and response tactics, techniques, and procedures. According to the Advisory, since its inception in February 2024, RansomHub has encrypted and exfiltrated data from at least 210 victims across industries, including water and wastewater, information technology, government services and facilities, healthcare and public health, emergency services, food and agriculture, financial services, commercial facilities, critical manufacturing, transportation, and communications critical infrastructure. The Advisory contains a series of mitigation recommendations to reduce the likelihood and impact of ransomware incidents.
U.S. LITIGATION
Utah Federal Court Halts Social Media Law Over First Amendment Concerns: A federal judge has temporarily blocked the Utah Minor Protection in Social Media Act due to potential First Amendment concerns. The legislation, which required age verification and limited minors’ social media access, was challenged by NetChoice, a tech coalition including TikTok and Meta. The court found that the law likely infringed on free speech by imposing unwarranted content-based restrictions. This decision is consistent with a wider trend of courts overturning similar legislation in other states. Another law permitting parents to sue over social media-related mental health issues will still take effect.
Texas Sues Federal Government over HIPAA Rules: The State of Texas has brought a suit against HHS challenging two rules under the Health Insurance Portability and Accountability Act (“HIPAA”). The suit first alleges HHS exceeded its statutory authority in passing a new rule limiting how covered entities can use and disclose personal health information related to reproductive healthcare. This new rule specifically prohibits covered entities from using and disclosing Protected Health Information (“PHI”) for the purpose of investigating or imposing liability on any individual seeking, obtaining, providing, or facilitating reproductive health care. The suit also challenges a privacy rule issued in 2000 that restricts covered entities from disclosing PHI in response to an administrative subpoena, civil investigative demand, or other similar requests. In its complaint, Texas argues that the two HIPAA rules inhibit the state’s ability to subpoena HIPAA-covered entities and thus “lack statutory authority and are arbitrary and capricious.”
U.S. ENFORCEMENT
Texas Attorney General Settles AI-Related Enforcement Action: The Texas Attorney General announced a settlement with Pieces Technologies (“Pieces”) for allegedly making a series of false and misleading statements about the accuracy and safety of its generative AI products used by Texas hospitals. The Texas hospitals provided their patients’ healthcare data in real-time to Pieces so that Pieces’ generative AI products could summarize patients’ conditions and treatments for hospital staff. The Texas Attorney General found that Pieces developed a series of metrics to make misleading claims that its generative AI products were “highly accurate” (e.g., claimed an error rate or “severe hallucination rate” of “<1 per 100,000”). The settlement requires Pieces to accurately disclose the extent of its products’ accuracy and ensure that the hospital staff using its generative AI products to treat patients understand the extent to which they should or should not rely on its products.
FTC Announces Enforcement Sweep Relating to Deceptive AI Claims: The Federal Trade Commission (“FTC”) announced that it has started enforcement actions against five companies it accuses of using AI to “supercharge deceptive and unfair conduct that harms consumers.” The cases include actions against a company promoting an AI tool that allegedly enabled its customers to create fake reviews, a company claiming to sell “AI Lawyer” services, and multiple companies claiming that they could use AI to help consumers make money through online storefronts. The enforcement sweep builds on at least five prior FTC enforcement actions relating to claims about and use of AI.
DOJ and State AGs Allege Data Sharing Violates Antitrust Laws: The U.S. Justice Department (“DOJ”) and eight states filed suit in the U.S. District Court for the Middle District of North Carolina against RealPage Inc. (“RealPage”), a property management software company, for alleged antitrust violations. The complaint contends that RealPage trains its algorithmic software by using current, nonpublic, and competitively-sensitive apartment rental pricing data provided by landlords contracted with RealPage. The RealPage software then generates future pricing recommendations and other terms for participating landlords based on the shared data. The government enforcers contend that this pooling of current data and subsequent adoption of RealPage’s recommended prices and other terms results in participating landlords coordinating on prices and artificially inflating apartment rental rates, which allegedly harms millions of renters across the United States. The complaint further claims that RealPage has monopolized or has attempted to monopolize the market for commercial revenue management software. The enforcers seek an order to enjoin the alleged unlawful coordination and restore competitive conditions in the affected market.
FTC and DOJ Settle with Verkada for Data Breach: The FTC and DOJ announced a settlement with Verkada, Inc. (“Verkada”), a security camera firm, for data breaches the company experienced. Verkada experienced at least two data breaches between December 2020 and March 2021, in which hackers were able to access customer data and Verkada’s internet-connected security cameras, including sensitive video footage. The complaint alleged Verkada: (1) failed to implement appropriate information security practices; (2) violated the Controlling the Assault of Non-Solicited Pornography and Marketing Act (“CAN-SPAM Act”) by sending commercial emails without an option to opt out; and (3) misled customers with respect to its compliance with HIPAA and the EU-U.S. and Swiss-U.S. Privacy Shield frameworks. Under the proposed settlement, Verkada is required to pay $2.95 million and develop and implement a comprehensive information security program. The settlement also prohibits Verkada from violating the CAN-SPAM Act and from misrepresenting its privacy and data security practices.
FCC Reaches $13 Million Settlement over Data Breach: The Federal Communications Commission (“FCC”) has reached a $13 million settlement with AT&T following a January 2023 data breach involving a third-party vendor’s cloud environment, which exposed data of approximately 8.9 million customers. The breach occurred when a vendor contracted by AT&T to host personalized videos failed to delete or return customer data as required by contract. Hackers accessed AT&T’s system through the vendor, compromising account lines, bill balances, and rate plan details from 2015-2017, though sensitive information, like credit card numbers and Social Security numbers, was not affected. The FCC’s investigation highlighted failures in cybersecurity, privacy, and vendor management, resulting in a consent decree that mandates enhanced data governance, comprehensive information security measures, and stricter vendor oversight. AT&T must now improve data tracking, enforce vendor compliance with data retention rules, and conduct annual compliance audits. This settlement underscores the importance of protecting consumer data and holding carriers accountable under the Communications Act. More information can be found in the FCC’s official release.
HHS Announces Settlement of Ransomware Investigation: HHS’ Office for Civil Rights (“OCR”) announced a settlement with Cascade Eye and Skin Centers, P.C. (“Cascade”). OCR initiated its investigation following receipt of a complaint alleging that Cascade had experienced a ransomware attack. OCR determined through its investigation that approximately 291,000 files containing electronic PHI (“ePHI”) were affected by the attack. OCR alleged that Cascade violated the HIPAA Security Rule by, among other things, failing to conduct a compliant risk analysis to determine the potential risks and vulnerabilities to ePHI in its systems, and failing to properly monitor its health information systems’ activity to protect against a cyber-attack. Under the terms of the settlement, Cascade has paid $250,000 to OCR and will implement a corrective action plan that requires Cascade to take steps toward protecting and securing ePHI. OCR will monitor the corrective action plan for two years.
FCC and the Office of the Privacy Commissioner of Canada Sign MOU on Privacy Enforcement: The United States Federal Communications Commission Chairwoman, Jessica Rosenworcel, signed a Memorandum of Understanding (“MOU”) with Privacy Commissioner of Canada Philippe Dufresne to strengthen information sharing and enforcement cooperation between the two regulators. The MOU establishes the parameters for the two regulators to exchange information in order to enforce compliance with laws in both countries and to share knowledge and expertise on regulatory policies and technical efforts. This strategic partnership is designed to strengthen efforts to protect consumers and ensure their fundamental rights to privacy and facilitate large-scale investigations of unlawful privacy practices. Although the MOU does not specifically address AI technologies, it may pave the way for enhanced coordination and information sharing in the development of targeted AI regulations or the application of existing regulations to AI technologies.
BIOMETRIC PRIVACY
Illinois Federal Court Upholds BIPA amid COPPA Concerns: An Illinois federal court ruled that the state’s Biometric Information Privacy Act (“BIPA”) is not preempted by the federal Children’s Online Privacy Protection Act (“COPPA”). The case, Hartman et al. v. Meta Platforms, Inc., concerns Meta’s use of augmented reality (“AR”) filters in its Messenger apps, with allegations of mishandling facial geometry data potentially violating BIPA. The court determined COPPA’s regulation of children’s personal information is distinct from BIPA’s focus on biometric data, allowing both laws to operate concurrently. Meta’s preemption argument was rejected, enabling the case to move forward under Illinois law and upholding the state’s power to enforce its privacy regulations. The court’s decision relied on differentiating the regulatory scopes of BIPA and COPPA, underscoring the importance of compliance with state privacy laws.
Illinois Appellate Court Rules No BIPA Exception for OTC Glasses: An Illinois appellate court answered the certified question prompted by Marino et al. v. Gunnar Optiks LLC, concluding that “[a]n individual who is trying on nonprescription sunglasses - unconnected to any specific medical advice, prescription, or need - is simply not within [the] statutory exclusion” under the state’s Biometric Information Privacy Act (the “Act”), and acknowledging that the several opposing federal court decisions issued last year “are not binding on this court.” Specifically, the court held that a person providing his/her biometric identifier(s) to obtain products categorized as Class 1 “medical devices” by the U.S. Food and Drug Administration does not become a “patient in a healthcare setting” protected under the Act. This decision clarifies the scope of when online “try-on” functionalities offered by many retailers may be subject to the Act.
INTERNATIONAL LAW & REGULATION
Brazil Data Protection Authority Publishes International Data Transfer Regulation: Brazil's data protection authority, the Autoridade Nacional de Proteção de Dados (“ANPD”), published its international data transfer regulation. Brazil’s General Personal Data Protection Law (“LGPD”) allows international transfer of personal data under specific circumstances and legally defined mechanisms. Many of these mechanisms were not fully developed under the LGPD and were unable to be implemented until the appropriate regulations were published. The ANPD’s data transfer regulation will now allow data controllers to rely on a wider variety of mechanisms to transfer personal data across national borders. Among these are transfers based on adequacy decisions and contractual protections known as standard contractual clauses. Organizations should review the international data transfer regulation and their international data transfers to select the most appropriate legal mechanism.
First Global AI Treaty Open for Signature: The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law (CETS No. 225) was opened for signature during a conference for the Council of Europe Ministers of Justice. It is the first-ever international legally binding treaty aimed at ensuring that the use of AI systems is fully consistent with human rights, democracy, and the rule of law. The Framework Convention was signed by Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino, the United Kingdom as well as Israel, the United States of America, and the European Union. The treaty requires signatories to maintain appropriate legislative, administrative, or other measures that take into account certain defined principles, such as human dignity and individual autonomy, transparency and oversight, accountability and responsibility, equality and non-discrimination, and privacy and personal data protection. The treaty is intended to provide a legal framework covering the entire lifecycle of AI systems, be technology neutral, and promote innovation while managing the risks it may pose to human rights, democracy, and the rule of law.
Dutch Data Protection Authority Fines Clearview AI for Unauthorized Data Collection: Autoriteit Persoonsgegevens, the Dutch data protection authority, fined Clearview AI 30.5 million euros for EU General Data Protection Regulation (“GDPR”) violations, with the potential for an additional incremental fine of 5 million euros in the event the cited violations are not corrected. The Dutch data protection authority stated that Clearview AI illegally collected photographs from the internet and converted them into biometric identifiers based on each person’s facial geometry. It also stated that Clearview AI does not provide sufficient notice to individuals that their data is in the Clearview AI database and does not cooperate with data subject access requests.
Australia Introduces Privacy Act Reforms: Amendments to Australia’s Privacy Act of 1988 were introduced in the Australian parliament. Proposed updates include a number of enforcement measures that criminalize the harmful misuse of personal data and create a tiered civil penalty regime. The amendments also include authorization for a Children’s Online Privacy Code to be drafted by the Office of the Australian Privacy Commissioner. Additional amendments are expected to be introduced in 2025. Australian Privacy Commissioner Carly Kind said in a statement that such future amendments could propose “a new positive obligation that personal information handling is fair and reasonable” and measures to “ensure all Australian organizations build the highest levels of security into their operations.”
RECENT PUBLICATIONS AND MEDIA COVERAGE
Blank Rome partner Alex Nisenbaum appeared on this podcast discussing the challenges and opportunities with both artificial intelligence and machine learning.
Blank Rome partners Harrison Brown, Ana Tagvoryan, and Erica Graves authored this article discussing the lawsuits filed in the wake of California’s newly passed “Drip Pricing” law.
Attack of the (Voice) Clones: Protecting the Right to Your Voice
Blank Rome partner Jeff Rosenthal and associate Timothy Miller authored this article discussing the legal developments surrounding the use of vocal cloning through artificial intelligence tools.
© 2024 Blank Rome LLP. All rights reserved. Please contact Blank Rome for permission to reprint. Notice: The purpose of this update is to identify select developments that may be of interest to readers. The information contained herein is abridged and summarized from various sources, the accuracy and completeness of which cannot be assured. This update should not be construed as legal advice or opinion, and is not a substitute for the advice of counsel.