The Musings of Jaime David
@jaimedavid.blog

The writings of some random dude on the internet



  • When Clippy Becomes a Symbol for the Internet We’ve Lost

    In the late 1990s and early 2000s, Clippy was a punchline. The animated paperclip, officially known as Clippit, would pop up in Microsoft Office to offer tips that were often irrelevant, unnecessary, or unintentionally hilarious. He became a symbol of intrusive, overenthusiastic technology—technology that meant well but didn’t always deliver. We rolled our eyes, we groaned, and we laughed about him. But now, decades later, Clippy has taken on an entirely different role. In 2025, Louis Rossmann, a well-known electronics repair technician and right-to-repair activist, launched a campaign urging people to change their profile pictures to Clippy. At first glance, it might seem like a quirky, internet-savvy joke. In truth, it’s a form of protest.

    Rossmann’s point is clear: technology, once designed to help users, is increasingly being built to control them. Clippy, for all his faults, had no ulterior motive. He didn’t mine your personal data, track your every move, or push you into buying a newer version of Office you didn’t need. His purpose was singular—help you write your letter, format your resume, or understand the software you were using. Today’s digital landscape is far from that innocence. The modern internet is full of systems designed not to help, but to manipulate, monetize, and surveil.

    The shift from help-first technology to profit-first technology is what Rossmann, borrowing writer Cory Doctorow’s term, calls “enshittification”: a process by which services degrade over time in the pursuit of revenue, control, and exploitation. The earliest versions of many platforms are user-focused: simple, intuitive, even joyful. Then monetization strategies kick in, algorithms begin to dictate user behavior, and features are locked behind paywalls or removed entirely. What was once a tool becomes a trap.

    And this isn’t just about the private sector. Governments around the world are increasingly stepping in with laws and regulations that, while often presented as protective measures, have the side effect—or perhaps the intended effect—of restricting freedoms online. The Kids Online Safety Act (KOSA) is one example. Framed as a way to shield children from harmful content, it requires platforms to exercise a “duty of care” to prevent a wide array of harms, from depression to bullying. On paper, it sounds noble. In practice, it’s dangerously vague. Who defines what “harmful” means? Civil liberties groups warn that KOSA could easily be used to censor important, even life-saving content, especially for marginalized groups like LGBTQ+ youth who rely on online spaces for support.

    The SCREEN Act, another U.S. proposal, goes a step further by mandating age verification for websites that host content deemed harmful to minors. That means handing over government IDs or other sensitive data to access vast portions of the internet. Privacy advocates are rightfully concerned: this isn’t just about protecting kids, it’s about reshaping the internet into a monitored, identity-verified space. It’s a short leap from there to an internet where anonymity is impossible.

    Across the Atlantic, the UK’s Online Safety Act has already gone into effect, bringing with it sweeping requirements for platforms to verify user ages and filter “harmful” content. Predictably, it has led to over-censorship, with platforms erring on the side of removing anything remotely controversial. News footage, political commentary, even educational resources have been swept up in the purge. The Wikimedia Foundation challenged the act’s categorisation rules in court, citing Wikipedia’s privacy-focused, volunteer-driven model, but lost. The law is being phased in, and its full impact will be felt in the coming years.

    Even YouTube, the world’s largest video platform, is rolling out AI-powered age verification, set to expand beyond test users starting August 13, 2025. The system uses machine learning to guess your age based on viewing habits, search history, and account longevity. If it thinks you’re underage, it restricts your access to content and disables personalized ads. Get misidentified? You can appeal—but only by handing over a government ID, a credit card, or a facial image. Once again, we are forced to trade privacy for participation.

    And then there’s the Tea app controversy, a recent and sobering reminder of how fragile privacy really is. Marketed as a women-only dating advice platform, Tea promised safety and discretion. In July 2025, it suffered two massive leaks: first, 72,000 images—including selfies and government IDs—were exposed; then, just days later, over a million private messages were leaked. What was meant to be a sanctuary for vulnerable users became a goldmine for bad actors. Multiple lawsuits are underway, but for the people whose personal information is now out in the wild, no court victory can undo the damage.

    When you step back and look at the big picture, the Clippy campaign isn’t just a nostalgic joke—it’s a pointed commentary on what we’ve lost. Clippy may have been clumsy, but he embodied a philosophy of technology that was transparent and singular in purpose: to assist the user. There was no hidden monetization scheme, no mass data harvesting, no psychological profiling. Compare that to today’s tech landscape, where help is often the bait and exploitation is the hook.

    Rossmann’s protest asks us to consider: what kind of internet do we want? Do we want one where services are designed to empower, or one where every click is monetized and monitored? Do we want tools that are honest about their purpose, or tools that pretend to help while quietly extracting value from us?

    The legislation and policies being rolled out right now are not isolated events—they are part of a trend toward a more restrictive, less private, and less user-centered internet. And unlike Clippy, these changes aren’t something we can simply click away from. They’re structural shifts that, once in place, will be incredibly difficult to reverse.

    For creatives like me, this hits especially hard. The internet has been a place to share ideas, stories, and art without gatekeepers. It’s been a tool for connecting with audiences and communities across the world. But the more laws that demand age verification, the more platforms that demand personal data, and the more algorithms that decide what can be seen, the smaller that creative space becomes. It’s a slow suffocation of the freedom that made the internet exciting in the first place.

    Changing a profile picture to Clippy might seem like a small act, maybe even a silly one. But symbols matter. They can rally people around a shared concern, spark conversations, and make abstract issues feel tangible. Clippy’s big, googly eyes and awkward smile remind us of a time when technology was still, in many ways, on our side. By putting him in our profiles, we’re not just being ironic—we’re making a statement.

    We’re saying we miss when tech was built for us, not against us. We’re saying we refuse to quietly accept policies and practices that strip away our privacy and autonomy. And we’re saying that, even if the fight seems unwinnable, we won’t stop pushing back.

    The internet doesn’t have to be perfect to be worth defending. It just has to be ours.

    You can find Louis Rossmann’s video down below.

  • When the Rules Change Overnight: What Content Creators Are Worried About

    As a content creator, I’ve come to accept that platforms change. Algorithms shift. Trends evolve. What worked one week might flop the next. But every now and then, something bigger comes along — something that makes us stop and wonder: Are we about to see the internet change in a major way?

    Lately, there’s been a lot of buzz around a new bill called the SCREEN Act (the Shielding Children’s Retinas from Egregious Exposure on the Net Act). It’s a proposal in Congress aiming to prevent minors from viewing explicit adult content online. On the surface, that sounds reasonable: after all, no one wants kids exposed to things they’re not ready for. But the way the bill plans to do this is raising some eyebrows.

    What’s being proposed is a form of age verification that could dramatically affect how all of us — not just kids — interact with the internet. And as a creator, that makes me a little uneasy.

    Here’s why:

    • Who decides what content is considered “explicit” or “harmful” for minors?
      Definitions can be vague, and that leaves room for overreach. Could educational material, discussions about identity, or even art be swept up in this?
    • Will platforms react by tightening their rules across the board?
      We’ve seen this before — when one kind of content becomes risky, platforms often cast a wider net to avoid lawsuits or backlash. That puts pressure on creators to censor themselves or risk demonetization, shadowbanning, or even removal.
    • Could creators be held responsible for who views their content?
      We already do our best to label content and follow platform rules. But it’s hard to control who clicks, who watches, or how old someone says they are. Are we now expected to police that too?

    This isn’t to say we don’t need better protections for young users online. We absolutely do. But we also need to be careful about how those protections are written into law — and what that means for people who rely on the internet to create, educate, and express themselves.

    As someone who creates with care and intention, I worry about being caught in the middle. I’m not here to post shocking or harmful material — but I also want the freedom to speak honestly, to tell stories, and to reach the people who need to hear them. New laws and policies have the potential to change that balance overnight.

    Whether the SCREEN Act passes or not, it’s a reminder that content creators aren’t just posting for fun — we’re navigating a complicated, evolving digital space where the rules are rarely clear, and the stakes are often high.