Depending on where you stand, Amazon v. Perplexity is either a defense of platform safety or a power play to police competition; a principled stand against unauthorized access or a convenient shield for ad revenue; a necessary curb on misrepresentation or an overreach that treats user delegation as trespass. Critics of Amazon see it hardening platform feudalism under the banner of security, while skeptics of Perplexity call Comet’s human-mimicking behavior growth-hacking with a legal fig leaf. The provocation is simple: if Amazon prevails, terms of service become a de facto veto on user autonomy and tool choice; if Perplexity prevails, bots masquerading as humans could be normalized inside credentialed spaces. Neither end state is comfortable, and the CFAA undertone raises the stakes from product policy to potential criminal exposure. Disclosure is the flashpoint—was this an agent hiding in plain sight, or a user’s own will extended through software?
The more surprising conclusion is that both narratives point to the same destination: agents will need their own identity, not borrowed human skins. Expect a fast migration toward explicit agent credentials, attested “agent headers,” least-privilege scopes, and auditable trails—an identity and conduct layer for autonomous software, akin to PCI for payments or SPF/DMARC for email. Platforms will carve out agent-only lanes with rate limits, capability gates, and revocation; regulators will bless disclosure and delegated authority as the lawful path; and developers will discover that winning isn’t bypassing rules but integrating with them. In that future, “clicks” give way to intent contracts: users authorize outcomes; agents prove who they are and what they did; platforms price access to capabilities, not eyeballs. Paradoxically, whichever side wins in court, the market settles on the same norm—transparent, scoped, and observable agency—because it’s the only equilibrium that preserves user choice without sacrificing platform trust.
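Mechanically, that identity-and-conduct layer could look something like the sketch below: an agent discloses who it is, presents least-privilege scopes, and signs the request so the platform can verify and audit it, gating each capability by scope. This is a minimal illustration under stated assumptions, not any existing standard; the header names (`Agent-ID`, `Agent-Scopes`, `Agent-Signature`) and the `sign_request`/`verify_request` helpers are hypothetical, and a real scheme would use platform-issued asymmetric credentials rather than a shared secret.

```python
import hashlib
import hmac
import json
import time

# Hypothetical platform-issued credential; a real scheme would use per-agent keys.
SHARED_KEY = b"platform-issued-agent-secret"

def sign_request(agent_id: str, scopes: list[str], method: str, path: str) -> dict:
    """Agent side: build attested 'agent headers' declaring explicit identity,
    least-privilege scopes, and a timestamp, then sign the whole request so the
    platform can verify it and keep an auditable trail."""
    headers = {
        "Agent-ID": agent_id,
        "Agent-Scopes": " ".join(scopes),
        "Agent-Timestamp": str(int(time.time())),
    }
    payload = json.dumps([method, path, headers], sort_keys=True).encode()
    headers["Agent-Signature"] = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return headers

def verify_request(method: str, path: str, headers: dict, required_scope: str) -> bool:
    """Platform side: recompute the signature over the disclosed identity and
    scopes, then apply a capability gate -- the request succeeds only if the
    agent was granted the scope this endpoint requires."""
    claimed = dict(headers)
    signature = claimed.pop("Agent-Signature", "")
    payload = json.dumps([method, path, claimed], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and required_scope in claimed.get("Agent-Scopes", "").split())

# An agent authorized only to read orders, not to place them:
hdrs = sign_request("example-shopping-agent/1.0", ["orders:read"], "GET", "/orders")
print(verify_request("GET", "/orders", hdrs, "orders:read"))   # True: disclosed, scoped, signed
print(verify_request("GET", "/orders", hdrs, "orders:write"))  # False: capability gate denies
```

The point of the sketch is the shape of the equilibrium: the agent never pretends to be a human, the platform never has to guess, and every action is attributable after the fact.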