The Musings of Jaime David
@jaimedavid.blog

The writings of some random dude on the internet


Tag: KOSA

  • When Clippy Becomes a Symbol for the Internet We’ve Lost

    In the late 1990s and early 2000s, Clippy was a punchline. The animated paperclip, officially known as Clippit, would pop up in Microsoft Office to offer tips that were often irrelevant, unnecessary, or unintentionally hilarious. He became a symbol of intrusive, overenthusiastic technology—technology that meant well but didn’t always deliver. We rolled our eyes, we groaned, and we laughed about him. But now, decades later, Clippy has taken on an entirely different role. In 2025, Louis Rossmann, a well-known electronics repair technician and right-to-repair activist, launched a campaign urging people to change their profile pictures to Clippy. At first glance, it might seem like a quirky, internet-savvy joke. In truth, it’s a form of protest.

    Rossmann’s point is clear: technology, once designed to help users, is increasingly being built to control them. Clippy, for all his faults, had no ulterior motive. He didn’t mine your personal data, track your every move, or push you into buying a newer version of Office you didn’t need. His purpose was singular—help you write your letter, format your resume, or understand the software you were using. Today’s digital landscape is far from that innocence. The modern internet is full of systems designed not to help, but to manipulate, monetize, and surveil.

    The shift from help-first technology to profit-first technology is what writer Cory Doctorow dubbed “enshittification” — a term Rossmann has adopted — describing how services degrade over time in the pursuit of revenue, control, and exploitation. The earliest versions of many platforms are user-focused: simple, intuitive, even joyful. Then monetization strategies kick in, algorithms begin to dictate user behavior, and features are locked behind paywalls or removed entirely. What was once a tool becomes a trap.

    And this isn’t just about the private sector. Governments around the world are increasingly stepping in with laws and regulations that, while often presented as protective measures, have the side effect—or perhaps the intended effect—of restricting freedoms online. The Kids Online Safety Act (KOSA) is one example. Framed as a way to shield children from harmful content, it requires platforms to exercise a “duty of care” to prevent a wide array of harms, from depression to bullying. On paper, it sounds noble. In practice, it’s dangerously vague. Who defines what “harmful” means? Civil liberties groups warn that KOSA could easily be used to censor important, even life-saving content, especially for marginalized groups like LGBTQ+ youth who rely on online spaces for support.

    The SCREEN Act, another U.S. proposal, takes it a step further by requiring mandatory age verification for websites deemed harmful to minors. That means handing over government IDs or other sensitive data to access vast portions of the internet. Privacy advocates are rightfully concerned—this isn’t just about protecting kids, it’s about reshaping the internet into a monitored, identity-verified space. It’s a short leap from there to an internet where anonymity is impossible.

    Across the Atlantic, the UK’s Online Safety Act has already gone into effect, bringing with it sweeping requirements for platforms to verify user ages and filter “harmful” content. Predictably, it has led to over-censorship, with platforms erring on the side of removing anything remotely controversial. News footage, political commentary, even educational resources have been swept up in the purge. Wikipedia fought the act in court, citing its privacy-focused, volunteer-driven model, but lost. The law is being phased in, and its full impact will be felt in the coming years.

    Even YouTube, the world’s largest video platform, is rolling out AI-powered age verification, set to expand beyond test users starting August 13, 2025. The system uses machine learning to guess your age based on viewing habits, search history, and account longevity. If it thinks you’re underage, it restricts your access to content and disables personalized ads. Get misidentified? You can appeal—but only by handing over a government ID, a credit card, or a facial image. Once again, we are forced to trade privacy for participation.

    And then there’s the Tea app controversy, a recent and sobering reminder of how fragile privacy really is. Marketed as a women-only dating advice platform, Tea promised safety and discretion. In July 2025, it suffered two massive leaks: first, 72,000 images—including selfies and government IDs—were exposed; then, just days later, over a million private messages were leaked. What was meant to be a sanctuary for vulnerable users became a goldmine for bad actors. Multiple lawsuits are underway, but for the people whose personal information is now out in the wild, no court victory can undo the damage.

    When you step back and look at the big picture, the Clippy campaign isn’t just a nostalgic joke—it’s a pointed commentary on what we’ve lost. Clippy may have been clumsy, but he embodied a philosophy of technology that was transparent and singular in purpose: to assist the user. There was no hidden monetization scheme, no mass data harvesting, no psychological profiling. Compare that to today’s tech landscape, where help is often the bait and exploitation is the hook.

    Rossmann’s protest asks us to consider: what kind of internet do we want? Do we want one where services are designed to empower, or one where every click is monetized and monitored? Do we want tools that are honest about their purpose, or tools that pretend to help while quietly extracting value from us?

    The legislation and policies being rolled out right now are not isolated events—they are part of a trend toward a more restrictive, less private, and less user-centered internet. And unlike Clippy, these changes aren’t something we can simply click away from. They’re structural shifts that, once in place, will be incredibly difficult to reverse.

    For creatives like me, this hits especially hard. The internet has been a place to share ideas, stories, and art without gatekeepers. It’s been a tool for connecting with audiences and communities across the world. But the more laws that demand age verification, the more platforms that demand personal data, and the more algorithms that decide what can be seen, the smaller that creative space becomes. It’s a slow suffocation of the freedom that made the internet exciting in the first place.

    Changing a profile picture to Clippy might seem like a small act, maybe even a silly one. But symbols matter. They can rally people around a shared concern, spark conversations, and make abstract issues feel tangible. Clippy’s big, googly eyes and awkward smile remind us of a time when technology was still, in many ways, on our side. By putting him in our profiles, we’re not just being ironic—we’re making a statement.

    We’re saying we miss when tech was built for us, not against us. We’re saying we refuse to quietly accept policies and practices that strip away our privacy and autonomy. And we’re saying that, even if the fight seems unwinnable, we won’t stop pushing back.

    The internet doesn’t have to be perfect to be worth defending. It just has to be ours.

    You can find Louis Rossmann’s video linked below.

  • Age by Algorithm: Why YouTube’s New AI Age Checks Raise Big Questions for Creators and Viewers Alike

    As creators, we know that the digital landscape is constantly evolving — new tools, new guidelines, and yes, new rules about who can see what and when. YouTube’s latest move? Using artificial intelligence to guess a viewer’s age, not based on their birthday, but on their behavior.

    That’s right. YouTube recently announced that it’s rolling out an AI-powered age detection system in the U.S. This system will estimate whether a user is over or under 18 by looking at what they watch, what they search for, and how long they’ve had their account — regardless of the birthdate they entered.

    For creators, this raises a lot of questions.

    1. Will our videos reach the intended audience?
    If someone is misclassified as a minor, they might be automatically excluded from seeing our content — even if it’s not inappropriate. That means creators could lose out on engagement, visibility, and potential revenue over something as opaque as an algorithmic guess.

    2. What happens if the system gets it wrong?
    The burden falls on users to prove their age with a credit card, government ID, or selfie. This isn’t just a hassle — it’s a potential privacy concern, especially for users who don’t feel comfortable sharing such personal data online.

    3. What about nuance?
    Not all content is clearly “for kids” or “for adults.” Sometimes, it’s educational. Sometimes, it’s artistic. Will AI understand the difference? Or will creators start censoring themselves to avoid being caught in the system’s net?

    This rollout comes on the heels of broader regulatory trends — like the Kids Online Safety Act (KOSA) and the UK’s Online Safety Act — which aim to protect minors online. And while those goals are important, creators and digital users alike are increasingly worried that the methods used to “protect” may lead to overreach, mistrust, or unintended harm.

    YouTube says this approach has worked well in other countries and will be tested with a small group of U.S. users first. But even so, it’s important for us — as creators, viewers, and digital citizens — to pay attention. AI isn’t perfect. And when it’s used to gatekeep access, influence algorithms, or reshape who sees our work, the stakes are higher than ever.

    Let’s keep the conversation going. Let’s stay informed. And most of all, let’s advocate for smart solutions that protect young users without punishing creativity, curiosity, or community.