Generative AI: Not as Unprecedented or New for Regulation as You Think
This perspective is part of a series of provocations published on Tech Policy Press in advance of a symposium at the University of Pittsburgh’s Communication Technology Research Lab (CTRL) on threats to knowledge and US democracy.
OpenAI CEO Sam Altman speaks during the US Federal Reserve Board of Governors’ “Integrated Review of the Capital Framework for Large Banks Conference” at the Federal Reserve in Washington, DC, on July 22, 2025. (Photo by MANDEL NGAN/AFP via Getty Images)
Last month, OpenAI unleashed Sora 2, a new generative AI model capable of transforming simple text prompts into strikingly realistic video. With its “cameo” feature, users can create videos that convincingly mimic another person’s speech and actions, raising alarms about digital identity theft through sophisticated deepfakes. Because the tool’s anti-impersonation safeguards are feeble, thousands of deepfakes, both consensual and non-consensual, have already surfaced. The rapid spread of such harmful content is a stark reminder of the pervasive societal problems associated with generative AI.
While the technical advances behind generative AI are extraordinary, the harms it produces are layered on top of problems that have dogged the tech landscape for years. In the five years since open-source deepfake tools became widely available, they have been weaponized for disinformation campaigns, fraud, and other societal disruptions, from electoral manipulation to corporate scams. Social media platforms and algorithm-driven content distribution amplify these harms in intensity and speed, but the harms themselves are not fundamentally new.
Policymakers have struggled to respond adequately to these rapidly evolving technologies, uncertain and often confused about their implications. The result is a haphazard patchwork of state-level regulations and non-binding commitments from tech firms, one that, by appearing to fill the gap, stifles more proactive measures. Because generative AI’s capabilities are opaque and unpredictable, a clear framework for accountability remains elusive.
One major source of confusion stems from conflating “AI” with “artificial general intelligence” (AGI), a concept that remains largely theoretical. Several tech companies are racing to achieve AGI, and the resulting focus on speculative future systems creates a misplaced sense of urgency that delays the timely regulation of the generative AI we already have. In reality, today’s AI primarily augments human tasks across regulated domains, and these technologies should be subject to the same safety and product-liability rules that govern other high-risk products.
Technology monopolies perpetuate the narrative that AI’s complexity and breadth make it unregulatable, a misconception that needs unraveling. Rather than accept this framing, policymakers and regulators should deconstruct AI systems into their individual components. Large language models (LLMs) and image and video generators, for example, each have distinct applications and should be governed by industry-specific rules rather than blanket approaches.
Another tactic used by tech giants is the argument that stringent regulation will stifle innovation. This was evident in the Biden-Harris Executive Order, which leaned heavily on voluntary commitments from industry that have proven largely ineffective. Warnings about hindering progress obscure the fact that governmental frameworks have historically fostered innovation while ensuring safety. The FCC’s 2024 ruling that AI-generated voices in robocalls are illegal, for instance, shows how targeted, legally binding rules can address real harms.
Moreover, the discourse around content authenticity and provenance often devolves into a tangled debate over free expression. That tangle creates loopholes that let tech companies and platforms evade responsibility for disseminating misleading or harmful content. Discussions of consent and intent in digital media should take precedence instead, giving people both understanding of and control over the content circulating online.
Consent must be central to how synthetic media is governed: if individuals find themselves depicted in a synthetic image or video and demand its removal, that right must be actionable. Meanwhile, the intent behind a piece of content, whether it is meant to sell a product or to spread political misinformation, should determine the appropriate regulatory pathway. Transparency around AI-generated content is equally crucial, yet many platforms fail to meet even basic labeling requirements that would tell users when the media they encounter is synthetic.
However novel certain elements of AI technology may be, the harms tied to its deployment are evolutions of existing societal problems. The real risk lies in our collective refusal to engage in meaningful conversations about regulation and accountability. Treating generative AI as a software product subject to standards and safety protocols would be significant progress toward mitigating its risks. It is high time we dismantled the myth of the “unprecedented” and took definitive steps to recognize and assess these harms, and to hold tech companies accountable for their innovations.