AI legislation is moving at the state and federal level in ways that will affect how service businesses can use AI in marketing, hiring, and customer interactions. Here's a plain-English briefing as of late April 2026.
Ido Cohen · Published 2026-04-24 · AI News
The Transparency Coalition published its AI Legislative Update on April 24, 2026, summarizing the state of AI bills across the US Congress and state legislatures. For service business owners, the relevant takeaway is not which specific bills passed — most of them will not directly regulate a 50-employee service business — but which categories of AI use are now under active legal scrutiny and how that should shape your operational decisions.
Here is the briefing.
Three categories of AI use where the legal floor is rising:
1. AI in customer interactions (voice, chat, email). Disclosure requirements are tightening. The safe operational standard is now to disclose at the start of any AI-mediated interaction that the customer is interacting with an AI system. This can be done naturally ("Hi, I'm an AI assistant for ABC Plumbing — how can I help?") and does not hurt conversion when handled well. Failing to disclose risks consumer protection complaints that are increasingly being acted on.
2. AI in hiring decisions. If you use AI to screen resumes, schedule interviews, or score candidates, you are now in a regulated category in several states. The minimum standard is: disclose AI use to candidates, retain records of how the AI was used, allow human review on request. If you are doing more than basic resume parsing, get specific legal guidance.
3. AI-generated marketing content. FTC guidance on AI-generated testimonials, before-and-after images, and influencer content is sharpening. The standard: AI-generated content that could mislead consumers about a product or outcome must be clearly disclosed. AI-stylized photos of your own work are generally fine. AI-generated images implying outcomes you cannot deliver are not.
Three concrete actions:
1. Audit your AI customer-touchpoints for disclosure. List every place a customer might interact with an AI on your behalf — voice agent, chatbot, email auto-responder, intake form scoring. For each one, confirm there is a clear disclosure that the customer is interacting with AI. Update where missing. This is a 1-2 hour task and removes the most common legal exposure.
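For the chat and email touchpoints, the disclosure check can even be automated. A minimal sketch, assuming a hypothetical integration point where you control the opening message of each AI-mediated session (the function name, greeting text, and detection heuristic are all illustrative, not from any specific chatbot platform):

```python
# Illustrative disclosure text, echoing the example from the briefing.
AI_DISCLOSURE = "Hi, I'm an AI assistant for ABC Plumbing - how can I help?"


def with_disclosure(first_message: str) -> str:
    """Prepend an AI disclosure to the opening message of a session,
    unless one is already present (crude substring check; a real
    implementation would use a more robust marker)."""
    if "AI assistant" in first_message:
        return first_message
    return f"{AI_DISCLOSURE}\n{first_message}"


# Example: wrap whatever greeting your chatbot would otherwise send.
print(with_disclosure("What can we fix for you today?"))
```

The point is not this particular check but the pattern: route every AI-authored opening message through one choke point that guarantees the disclosure, rather than trusting each tool's default settings.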
2. Review your AI-generated marketing assets. If you use AI to generate hero images, before/after photos, social media graphics, or video content, confirm none of them imply outcomes or capabilities you cannot deliver. If you use AI-generated testimonials or reviews in any way, stop. The legal exposure on synthetic reviews is significant and growing.
3. Document your AI use. For your own records, write a one-page "AI in our business" document that lists every AI tool you use, what it does, where it touches customers or employees, and what disclosures are in place. This is not currently required but it is the document you will need if a regulator ever asks. Building it now while the list is short is much easier than trying to reconstruct it later.
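The one-page document can be as simple as a structured file kept next to your other operating records. A hypothetical sketch (tool names and entries are illustrative, not recommendations):

```yaml
# "AI in our business" inventory - one entry per tool, all entries illustrative
- tool: Website chatbot
  purpose: Answer pre-sales questions, book appointments
  touches: Customers
  disclosure: Opening message states the customer is chatting with an AI
- tool: Resume screening assistant
  purpose: Rank inbound applications for human review
  touches: Job candidates
  disclosure: Stated in the job posting; human review available on request
```

Any format works; what matters is that the four fields — what the tool is, what it does, who it touches, and what disclosure is in place — exist for every tool before a regulator asks.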
The federal picture is more uncertain than the state picture. Congress has not yet passed a comprehensive AI law. Most federal action is happening through agency rulemaking — FTC, EEOC, and CFPB are all active. The likely path is sector-specific regulation rather than a sweeping AI Act.
For service businesses, the key federal area to watch is FTC enforcement on deceptive AI marketing. The FTC has been clear that it views misleading AI-generated content as the same category as any other deceptive marketing — meaning the existing FTC Act applies and they do not need new authority to act on it.
US service businesses without EU operations are largely insulated from the EU AI Act. The exception is if you serve EU customers, whether through your website or as a regular part of your business. In those cases, the EU AI Act's transparency and high-risk AI obligations may apply, and the August 2, 2026 deadline for high-risk AI obligations is approaching. If you have any EU exposure, get specific guidance now.
For US-only service businesses, the EU AI Act is a useful preview of where US regulation is heading more than an immediate compliance issue. The categories of AI use the EU is treating as high-risk (hiring, biometrics, credit, customer profiling for material decisions) are the same categories US regulators are focusing on.
The legal floor for AI use in service businesses is rising slowly, not catastrophically. The companies that will get caught off guard are the ones using AI invisibly across customer touchpoints with no documentation, no disclosure, and no plan for the moment a customer or regulator pushes back. The companies that will be fine are the ones that disclose AI use clearly, keep simple records, and avoid using AI to imply outcomes they cannot deliver.
Most of this is operational hygiene. It is not exotic. The cost of doing it now is small. The cost of having to do it under pressure later is much larger.
Do US AI regulations actually apply to small service businesses?
Most federal AI regulations target large platforms or specific high-risk applications (hiring, credit, healthcare). Small service businesses are largely outside direct regulation. The exceptions: AI used in hiring decisions (state laws apply in CA, NY, IL, CO), AI customer service that does not disclose AI use (consumer protection complaints), and AI-generated marketing content that misleads consumers (FTC enforcement).
What's the safe disclosure standard for AI customer interactions?
Disclose at the start of any AI-mediated interaction that the customer is interacting with an AI system. This can be natural ("Hi, I'm an AI assistant for ABC Plumbing — how can I help?") and does not hurt conversion. Failing to disclose risks consumer protection complaints that are increasingly being acted on at the state level.
Can I use AI to generate testimonials or before/after images for marketing?
AI-generated testimonials are a legal red zone — assume they are not allowed under FTC standards even when not explicitly prohibited. AI-stylized photos of your own real work are generally fine. AI-generated images implying outcomes you cannot deliver are not. The safe rule: AI-generated content that could mislead consumers must be clearly disclosed.
Does the EU AI Act affect US service businesses?
Only if you serve EU customers through your website or take EU customers as part of your business. For US-only service businesses, the EU AI Act is a useful preview of where US regulation is heading — particularly around high-risk uses like hiring, biometrics, and customer profiling for material decisions — rather than an immediate compliance issue.