Max Greenwood
As Washington and state lawmakers work through a new presidential administration and their legislative sessions, the political world is bracing for major changes to AI regulation, data privacy laws and campaign messaging in 2025.
Julie Sweet, director of advocacy and industry relations at the American Association of Political Consultants, sat down with C&E this year to discuss some of her major concerns and priorities.
This interview has been edited for length and clarity.
C&E: I want to start really big picture. D.C. has a new administration. Right now, the FEC has four active members. States, including Texas, are in their legislative sessions. What are you watching this year? What are you telling political professionals to watch for this year?
Julie Sweet: There are three main areas I'm paying attention to – areas I think are going to have a significant impact on strategy, tactics and compliance – and those are AI, data privacy and political messaging, by which we usually mean SMS texting and other calling tools.
It's a complex and, to put it lightly, uncertain regulatory environment. There's not a ton of federal action we're seeing or expecting to see, especially around AI. There have been some fits and starts around data privacy at the national policy level. And when it comes to SMS, we have the TCPA (Telephone Consumer Protection Act).
But we're seeing the states take the lead on the first two, and then expand on TCPA requirements … I think we have 19 states that have rules around the use and/or labeling of AI. We have many states with comprehensive data privacy rules and data broker registration requirements. And what that does is make things really difficult from a compliance standpoint, because say you work in many states – then you've got a bunch of different variations in data privacy requirements and/or AI rules. And we're seeing this not only from regulators and legislators, but also from the platforms. The platforms have wildly divergent policies, and then the developers themselves do as well.
We have some examples that say there's no political use case allowed. OpenAI is a good example. What does that mean, and how is it applied and enforced? Does something get flagged? Does it not get flagged? So we're really trying to get clarity and predictability from the states and from the platforms. And that's important not only from an operational compliance perspective, but also for strategy and tactics, and how practitioners can use and deploy these tools.
And then the third thing is: What are ethical practices, and how do we balance those ethical standards with free speech and innovation? The AAPC has firmly opposed the use of deceptive AI-generated content. But we have to be careful not to stop legitimate campaign speech and responsible AI use. I think there's a conflation between AI-generated content and deepfakes. And while a deepfake requires AI – because 'deep' refers to deep learning – you can also create very compelling media with Photoshop and other digital tools. And so both regulators and, from an ethical standpoint, we ourselves really need to be precise about what it is, and make sure we're not sweeping in everything that could be a digital tool, which is what we're seeing in some states.
C&E: I want to dig into the AI issue a little more. There's a lot of legislation this year, and I think Texas probably stands out as a big … Are there any bills in particular that you're watching and talking to industry professionals about?
Sweet: You've got Nevada as another example, where the secretary of state is a major proponent of its (AI-generated content) labeling bill. It would also include keeping certain material on file or on record that's accessible to the public. I think we're monitoring more than a dozen bills right now. We have 19 states that already have laws on the books, most of which passed in 2024, and just a handful – California, Texas, Minnesota, I think – already had them before that.
But I think what we're seeing are bills that require some form of disclaimer or labeling. And they vary widely in the language they require. So some say 'some use of synthetic media,' some say 'manipulated by AI,' some say 'generated by AI.' And what we know from research … is that the language really matters in terms of how people receive that information. And so we want legislators to be really thoughtful about disclaimers and the real estate they take up on the content itself. Right? If you require a certain size, it takes up a significant percentage. There are some audio disclaimers that require about a four-second disclosure, which would come on top of the stand-by-your-ad disclaimer. So that's eight seconds of a 30-second ad, right? That's problematic.
On top of that, we're really concerned about penalties. Private rights of action, and who can sue as a result, is something we're watching really closely, as well as fines. I'd have to go back and find the state for you, but I think one state (says) that if you use a deepfake ad, you can no longer lobby the legislature and you're removed from the ballot, which is a very harsh penalty.
The other big thing we pay attention to is how they define 'deepfake,' how they define 'synthetic media' and how they define 'digital tools,' because there are a number of these bills, and plenty of laws currently on the books, that would cover 'any digital tool.' So does that include Photoshop? So we really try to look at the definitions. The goal here is really to standardize. But part of the challenge is that few of these bills have been tested through litigation and enforcement, so I think that creates a lot of uncertainty among practitioners. How are they going to be applied? How aggressive is (the state's attorney general) going to be?
C&E: You touched on this a little earlier. The AAPC has been very clear that it doesn't support the deceptive use of AI. It doesn't support misinformation. But this is still a relatively new technology. Is there any concern that some of these laws could stifle creativity and innovation and basic free speech rights? How do you see the industry striking that balance right now?
Sweet: I think AAPC members are responsible actors. There are many ways to deploy AI. There's a lot of public education needed around that conflation of any AI content or AI tool with deepfakes, which the AAPC has dedicated itself to. But then, it goes back to that legislative and legal language … Is it trying to go after a deepfake, or is it just trying to regulate the tool? And when you try to regulate the tool, that's like trying to regulate the printing press, right? So that's what we really want to see: How do we get a system of transparency and accountability for the content being created, (but) that doesn't create this patchwork of compliance?
How do we ensure that good actors, responsible actors, are empowered to use these tools and innovate? I've said this on a few different occasions: I think the challenge is, if the industry isn't able to innovate because of these compliance challenges, and doesn't understand how the regulatory landscape is going to play out, these tools become risky to adopt. And if the people who aren't responsible actors – the ones who aren't going to label their content, the ones who want to run misinformation (or) disinformation – are using these tools, then the good actors are immediately put at a disadvantage. They won't be as efficient. We won't be able to be as persuasive, right? The content won't be as compelling, and we won't be able to personalize it in the same way the bad actors can. And remember, the courts have been very clear that you can't simply regulate this content away. I think that's something we're going to see litigated again and again: Can this content be restricted? We need to empower counterspeech and ensure that counterspeech can be as effective as the initial misinformation.
C&E: I want to turn to social media for a second, because I think one big development we saw this year was Meta changing some of its content moderation policies. Are there any other platforms that you're watching, or that the industry should be watching, where we could see some changes this year?
Sweet: I think it's too early to tell. This continues to shift and change, not only because of the new administration and a new Congress, but because platforms are adjusting to a new federal regulatory environment. The Trump administration has made it very clear that they don't want regulation or obstruction of AI tools. But when I talk about Meta and other platforms' content policies, what I'm most concerned about is: How do those policies work in practice? And are they being applied with transparency and consistency?
For example, one of the challenges is when Facebook decides to shut off paid advertising a week before the election, regardless of the actual consequences. Is the policy they're implementing actually serving the goal they expect it to serve? Because again, if you shut off paid advertising for the week before the election, that leaves you with just organic content, without the ability to respond effectively beyond your own small universe.
I think the one thing we know, the one thing we can know, is that it's going to keep changing constantly.