New York University's Cybersecurity for Democracy (C4D) has released a technical standard for universal digital ad transparency, developed in collaboration with researchers from Bowdoin College, Washington State University, Wesleyan University, and Mozilla. The standard defines the criteria for determining which ad platforms would need to comply and what content should be made transparent. It also compares the proposed standard to platforms’ current voluntary transparency practices and to related U.S. legislative efforts.

Such a standard is long overdue, because digital ads can be served to a single user at a time. Unlike broadcast communication, which many people see simultaneously, highly tailored and manipulative messaging can be aimed at an individual without detection. Such microtargeted delivery can be, and has been, used to disseminate medical disinformation; promote scams and predatory, discriminatory advertising practices; and spread content that is divisive and violent.

Many platforms, including Facebook and Google, apply little or no human review to ads submitted for promotion. Instead, they rely on inherently porous AI-based content filters that effectively outsource content moderation to users. This lack of standards allows political and health misinformation, hate speech, and other abusive content to slip past the algorithms and proliferate through paid promotion, from which the platforms profit.
The proposal calls for all major digital ad platforms to disclose an ad’s impressions, targeting, placement, delivery, and creative data to the public. It also pushes for disclosures around content-removal details and decision-making processes. The FTC would store this data in public databases, available to users and researchers for up to seven years. The standard would apply to any social media company that uses microtargeting in advertising, does not use human review of advertisements, or has a reach exceeding one-third of the United States adult population.
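To make the proposal's two components concrete, here is a minimal sketch in Python of (a) the disclosure fields the standard would make public and (b) the applicability test. All names, field types, and the population figure are illustrative assumptions, not part of the standard itself.

```python
from dataclasses import dataclass, field

# Assumption for illustration: approximate U.S. adult population.
US_ADULT_POPULATION = 258_000_000


@dataclass
class AdDisclosure:
    """Hypothetical record of the data the standard asks platforms to disclose."""
    ad_id: str
    impressions: int          # how many times the ad was shown
    targeting: dict           # criteria the advertiser selected
    placement: str            # where on the platform the ad appeared
    delivery: dict            # who the ad was actually delivered to
    creative: str             # the ad content itself (text or media reference)
    removed: bool = False     # content-removal details and
    removal_reason: str = ""  # the decision-making process behind them


def standard_applies(uses_microtargeting: bool,
                     uses_human_review: bool,
                     reach: int) -> bool:
    """A platform must comply if it microtargets ads, lacks human review
    of advertisements, or reaches more than one-third of U.S. adults."""
    return (uses_microtargeting
            or not uses_human_review
            or reach > US_ADULT_POPULATION / 3)
```

Note that the three criteria are disjunctive: a platform with modest reach still falls under the standard if it microtargets, or if it relies solely on automated ad review.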