Label AI-Generated Ads for Transparency
Video Transcript
Full text from the video
An Evaluation of Potential Rules on AI-Generated Content in Advertising

With the rapid advancement of generative artificial intelligence, companies have been quick to adopt the technology to further optimize their advertising strategies. While it has revolutionized some workflows, the results are not all positive. Built by using generative tools to iterate rapidly on media, these advertisements pose a novel problem for consumers: they are uniquely deceptive, whether or not the advertisers intend them to be. Research has demonstrated that consumers cannot reliably identify AI-generated content in advertising and react negatively once they are made aware of it. This shift in perception once undisclosed AI content is revealed shows that consumers view such advertisements as a manipulation of their trust. The Federal Trade Commission should establish regulations requiring disclosure of AI-generated content in advertising, if only to protect consumers from engaging with content they would otherwise avoid. Without disclosure requirements, companies running these advertisements systematically undermine consumer autonomy and violate the basic principles of consumer protection.

In advertising, material information is defined as any detail that would have a reasonable impact on a consumer's decision to purchase or engage with a product. This ranges from something small, like taxes on a purchase, to something important, like the potential side effects of a medication. Such information must be disclosed in most advertising contexts, which is why we see the familiar asterisks next to prices and the lengthy recitations of side effects in commercials. Research suggests that the concept of material information applies to AI-generated content in advertising as well. In a study on the effects of disclosure, consumers showed an unfavorable attitude toward disclosed ads, with a substantial effect on the perceived credibility of the brand itself (Baek et al.). This effect indicates that the use of AI-generated content in advertisements is material information and should be disclosed so that consumers can make informed decisions about the advertisements and products they interact with.

In recent years, the Federal Trade Commission has acknowledged the dangers of fake content in advertising with its final rule banning fake testimonials and reviews, including those generated by AI (Federal Trade Commission). Precedent matters for potential legislation, especially for a problem growing this quickly. In 2024, the European Union adopted a law requiring companies to disclose the use of AI-generated content, chiefly deepfakes, in advertisements ("Article 50: Transparency Obligations"). The passage of this act and its implementation in 2026 demonstrate that disclosure requirements are not out of the question. Furthermore, the Federal Communications Commission has proposed a rule requiring disclosure of AI-generated content in all broadcast political advertisements (Federal Communications Commission). Combined with the FTC rule, this shows that United States regulatory bodies acknowledge the importance of transparency around deceptive AI-generated content in advertising.
Taken together, all three demonstrate a pattern that supports disclosure requirements, especially given how closely they were timed.

Consumer opinions are important for any business to consider, especially in advertising. Disclosing AI-generated content might seem to harm brand perception, but in some cases the opposite is true. Research has shown that while disclosure initially lowers perceived credibility, this effect can be mitigated when the content is seen as more human-like rather than machine-like (Baek et al.). Furthermore, high-quality AI-generated content can still be effective even with disclosure: disclosure does not always cause negative brand impact if the content itself is high quality (Whittaker et al.), which shows that brands can still use AI-generated content in ways associated with positive consumer appraisal. By contrast, non-disclosure of AI-generated content that consumers discover afterwards has been associated with negative brand perception (NielsenIQ), further encouraging disclosure to limit the impact on brands.

Deepfakes are pieces of media, usually videos or images, altered with the intent to deceptively impersonate another person. With the rapid expansion of artificial intelligence, deepfakes have become more prevalent and more sophisticated, to the point where the term is now synonymous with content nearly indistinguishable from reality. Deepfakes are the most obvious source of deceptive AI-generated content in advertising, so much so that the FCC has already proposed rules specifically targeting deepfaked content in political advertisements (Federal Communications Commission). Both regulatory bodies and supporters of AI-generated content in advertising recognize the risks of deepfakes; one example is a literature review written to build a framework for understanding consumer responses to manipulated advertising (Campbell et al.). Deepfakes are a clear, easy-to-understand problem that the vast majority of consumers agree on, and the fact that they are becoming more sophisticated through AI and then being used in advertising supports disclosure requirements.

For these disclosure requirements to be effective, companies need a framework for anticipating consumer responses to AI-generated content in advertising. Advertising researchers have provided such a framework in a literature review covering deepfakes and AI-generated ads, analyzing consumer opinions through key factors like ad falsity, consumer response, and originality (Campbell et al.). The framework gives advertisers a tool for predicting general consumer responses to AI-generated content and deepfakes. In it, the researchers demonstrate ways to advertise effectively with AI-generated content without alienating consumers over perceived credibility, helping make sense of consumer trust and reactions to content that might have been seen as deceptive had it not been disclosed. Transparency through disclosure reduces the likelihood that consumers will see advertisements as deceptive, especially when the content itself is high quality, as discussed above.
Disclosure is therefore a better outcome for brands than non-disclosure followed by damaged brand perception.

Enforcing such disclosure requirements takes resources and planning, but it has been shown to be feasible. Laws targeting deceptive AI-generated content in advertising already exist, such as the European Union's Artificial Intelligence Act ("Article 50: Transparency Obligations"). The act specifically targets deepfakes used in advertising and provides solid precedent for doing the same in the United States. Because disclosure rules have already been implemented internationally, implementing them in the United States is relatively straightforward compared with pioneering such laws.

Critics of potential legislation do exist, however. One common concern is the ability of small businesses to adapt to such requirements, especially startups that already use AI-generated content in their advertising and would have to change their marketing strategies. This concern can be addressed through phased implementation and a generous buffer before requirements take effect, as the EU AI Act does. The act allows roughly two years between passage and the application of its transparency obligations, giving at-risk small businesses time to adapt, and its implications have been studied extensively (Ivković et al.). Furthermore, United States regulatory bodies have already worked through implementation in other contexts, such as the FCC's proposed disclosure requirements for AI-generated content in broadcast political advertisements.

Another commonly cited concern is that disclosure requirements may alienate consumers from businesses. This concern is already addressed, in part, by the research on consumer responses discussed earlier. Studies show that consumers are capable of making informed decisions about AI use in advertising and can even view AI-generated content positively when it is disclosed up front (Baek et al.). Building trust through disclosure often results in positive brand perception, which directly contradicts the concern that consumers will be alienated. In fact, consumer trust is negatively impacted when AI-generated content is not disclosed but is instead discovered by consumers themselves. Disclosure is therefore often in the advertiser's best interest, despite what some critics claim.

For legislation to be effective, it must be defined in strict terms. Those terms must define the penalties, the actors, and the act itself; for the purposes of this paper, focusing on the act itself is sufficient. Effective enforcement of disclosure requirements, for example, needs a strict definition of AI-generated content. That definition must distinguish between using artificial intelligence as a tool that assists a human user and using it as a purely generative creator. Spell check, for instance, is a tool that assists a human user, while the generative AI at issue here creates the content itself.
The distinction between the two matters, because requiring disclosure for spell check or similar tools is both unnecessary and unrealistic. For a more precise definition of generated content, the EU AI Act is a helpful reference. Article 50 states that "Deployers of an AI system that generates or manipulates image, audio, or video content constituting a deep fake, shall disclose that the content has been artificially generated or manipulated" ("Article 50: Transparency Obligations"). This definition clarifies exactly what kind of content potential legislation in the United States should target and exactly whom it should concern.

There are further considerations about the context in which any future legislation is implemented. Different advertising media require different approaches, especially given the differences between audio and visual disclosure. The Federal Communications Commission's focus on political broadcast media demonstrates an approach that is specific, reasonable, and enforceable, qualities that are paramount in drafting legislation (Federal Communications Commission). Different kinds of media and their specific implementations require careful thought if those three qualities are to carry over into new legislation. High-risk media, for example, call for more prominent disclosure, particularly in the case of deepfakes. Deepfakes are, of course, a higher-risk category for advertising, since they can show real people saying or doing things that damage one brand or deceptively endorse another. Exactly how disclosure requirements are implemented depends heavily on context and must be handled by the agency in charge of implementation, which in this case would be the Federal Trade Commission.

Because legislation on this very topic has already been proposed, it is important to consider the scope limitations of these potential solutions. Legislation proposed or implemented by United States regulatory bodies does not yet cover all aspects of advertising; the Federal Communications Commission's proposed rule, for example, covers only AI-generated content in broadcast political advertising (Federal Communications Commission). This reveals a significant gap in consumer protection and shows that the currently proposed legislation is unlikely to cover consumers in most instances. That said, the current pace of implementation broadly tracks the EU AI Act's approach, which also began by focusing on high-risk concerns. The EU AI Act addresses deepfakes in advertising as a whole, while the FCC's proposal addresses political advertisements. Both are high-risk areas and set precedent for phased implementation of tighter disclosure requirements, which can be built on the current proposals from United States regulatory bodies.

In conclusion, the Federal Trade Commission must establish comprehensive disclosure requirements for AI-generated content in advertising to protect consumers' right to make informed decisions about the content they engage with. As this paper has shown, existing precedent from the EU AI Act, the FCC's proposed rules for political advertisements, and the FTC's own ban on fake testimonials all demonstrate that transparency for AI-generated content in advertising is both necessary and feasible.
When consumers cannot distinguish AI-generated content from human-created content, and that knowledge would have affected their decision to engage with the advertisement, their trust and their right to make an informed decision are compromised. The FTC must protect consumers from advertisers that have not chosen to disclose this content on their own. Now that precedent has been set both domestically and internationally, the time for comprehensive regulation has arrived.
The ads you're scrolling past might not be what they seem. Companies are using
AI to create them, and research shows we can't even tell the difference. But here's the crazy
part: when people find out an ad was secretly made by AI, they feel deceived
and their trust in the brand plummets. This means the use of AI is "material information"—something
that could actually change your mind about buying. The government should require a simple
label, like 'AI-Generated Content.' It’s not about stopping technology; it’s about basic
honesty and protecting our right to make informed choices.