The Musings of Jaime David
@jaimedavid.blog

The writings of some random dude on the internet

1,099 posts
1 follower

Category: news

  • The Rise of a New Facebook Scam: The Brain Game Image Trick and the ‘BE CV BK 2025 -R-D’ Message


    Scammers are always reinventing their tactics, and Facebook is often the testing ground for their newest schemes. Recently, a peculiar type of scam has started to appear on the platform, and it’s catching many users by surprise. On the surface, these posts look harmless: a colorful brain game puzzle, the kind of post designed to get people to pause, think for a moment, and maybe share or comment their answers. But attached to these posts is something strange—an odd string of text that looks like a cryptic code. It usually appears right before the puzzle image, reading something like:

    BE CV BK 2025 -R-D (sometimes written as BE CV BK.2025 -R-D)

    At first glance, this might seem like nonsense. Some people might assume it’s a typo, others might think it’s part of the puzzle, and others still might ignore it altogether. But that strange text is not random, and the brain game image is not as innocent as it seems. These posts are being used by scammers as bait, and the bizarre text acts as a marker for their scheme. After interacting with the post, many users are soon contacted on Facebook Messenger by a scammer using a business account.

    This essay will unpack how the scam works, why the text is significant, and what the ultimate goal of the fraudsters is. More importantly, it will explore why this scam has become effective, what Facebook’s role in allowing it to spread might be, and how users can protect themselves.


    The Setup: Puzzle Posts as Bait

    Facebook has always been filled with puzzle and quiz posts. They thrive because they’re easy to engage with, spark curiosity, and don’t seem dangerous. A riddle or IQ test feels harmless compared to a link promising free money or a too-good-to-be-true offer. Scammers have realized this, and that’s why they’ve begun using these posts as the entry point for their schemes.

    The difference this time is that the text right before the image—BE CV BK 2025 -R-D—sets these posts apart. It’s a deliberate addition, not a mistake.


    The Strange Text Before the Image

    Unlike scams that hide malicious links inside images, this one places the odd message in plain sight, right before the puzzle picture. This string of text doesn’t appear to lead anywhere or mean anything, but it serves several subtle purposes.

    1. It draws curiosity. People naturally want to know what the random letters and numbers mean. Some might even comment asking about it, which boosts the post’s engagement.
    2. It serves as a scammer’s tag. By inserting the same text in every post, scammers can track their work. Searching the string on Facebook brings up all the active scam posts, allowing them to monitor and manage the campaign.
    3. It marks posts for connection. Other scammers or automated accounts know which posts are part of the scam network. It’s like a digital signature to signal “this is bait.”

    The placement is also intentional. By putting the text right before the brain game image, scammers make it look almost like part of the puzzle itself, tricking some users into interacting more than they normally would.


    What Happens Next: The Messenger Message

    Once someone comments, likes, or otherwise engages with the post, scammers take the next step. A message arrives in Facebook Messenger, but not from a regular profile. Instead, it comes from a business account.

    This detail matters. Facebook allows business pages to message individuals even if they aren’t friends. Scammers exploit this to bypass normal restrictions and make their message look official or professional. To the average user, a message from a business might seem safer or at least more legitimate than one from a random personal account.

    The message itself varies, but it usually attempts one of the following scams:

    • Phishing: Asking you to click a link to “claim a prize,” “verify your account,” or “solve the puzzle answer.” These links lead to fake login pages that steal your credentials.
    • Fake Jobs: Offering too-good-to-be-true “work from home” opportunities that require upfront fees.
    • Investment Scams: Promising to double or triple your money through crypto or trading schemes.
    • Social Engineering: Trying to build trust through conversation, eventually leading to financial or personal data requests.

    The puzzle post was never the scam itself—it was the lure to get you into the Messenger trap.


    Why This Scam Works

    This scam succeeds because of a mix of psychology and platform design.

    • Harmless disguise: A puzzle looks innocent. People associate it with fun and intelligence, not danger.
    • Curiosity factor: The odd text feels like a mystery that begs for an explanation.
    • Legitimacy by design: Business accounts on Messenger look official, which lowers suspicion.
    • Algorithm boost: Facebook prioritizes posts with engagement, so the more people comment on the puzzle, the more the post spreads.

    Scammers thrive on exploiting these cracks in human behavior and platform systems.


    The Broader Context of Facebook Scams

    The “BE CV BK 2025 -R-D” scam is just the newest iteration of an old trick. Scammers constantly rotate their methods—fake celebrity news, shocking videos, chain letters, and now puzzle posts. The goal is always the same: lure, hook, exploit.

    Each new scam teaches scammers something about what works. In this case, they’ve learned that people trust puzzle content, engage with cryptic text, and rarely suspect business pages of foul play. It’s a perfect storm.


    Protecting Yourself

    Awareness is the first line of defense. Here are some ways to avoid falling for this scam:

    1. Ignore strange codes before images. If you see text like “BE CV BK 2025 -R-D” before a puzzle, don’t engage.
    2. Be wary of unsolicited business messages. Unless you sought out the business yourself, treat cold messages as red flags.
    3. Never click strange links. If someone sends you a link claiming it’s tied to the puzzle, don’t trust it.
    4. Report suspicious posts. Use Facebook’s tools to report both the post and the business page.
    5. Keep your account secure. Use two-factor authentication and strong passwords.

    Why Facebook Needs to Do More

    While users can and should protect themselves, Facebook has responsibility here. Allowing scammers to spread identical text strings across dozens of puzzle posts shows that the platform isn’t catching obvious patterns. Worse, the misuse of business accounts to cold-message individuals is a glaring loophole.

    Facebook could address this by:

    • Automatically flagging repeated unusual text patterns.
    • Limiting unsolicited business messaging privileges.
    • Investing more in scam-detection teams and AI moderation.
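    The first of those measures, flagging repeated unusual text patterns, can be sketched in a few lines. The regex, the token shape, and the repeat threshold below are illustrative assumptions for demonstration, not anything Facebook actually uses:

```python
import re
from collections import Counter

def find_repeated_markers(posts, min_repeats=3):
    """Flag code-like strings (e.g. 'BE CV BK 2025 -R-D') that recur across posts.

    A 'marker' is loosely defined here as three or more short runs of
    uppercase letters, digits, or hyphens separated by spaces. This
    definition is an assumption for illustration only.
    """
    marker_re = re.compile(r"\b(?:[A-Z0-9-]{1,6}\s+){2,}[A-Z0-9-]{1,6}\b")
    counts = Counter()
    for post in posts:
        for match in marker_re.findall(post):
            counts[match] += 1
    # Report only strings that recur across several posts.
    return [marker for marker, n in counts.items() if n >= min_repeats]
```

    Run over a feed of post texts, a check like this would surface "BE CV BK 2025 -R-D" as soon as it appeared a handful of times, which is exactly the kind of cheap pattern detection the essay argues the platform could automate.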

    Until they do, scams like this will continue to thrive.


    The Human Side of Scams

    It’s easy to look at scams only in terms of money lost, but the psychological impact is just as damaging. People who fall for scams often feel embarrassed, ashamed, or distrustful afterward. Some don’t even report what happened because they feel like they should have “known better.”

    But scams like this prove that anyone can be fooled. The design is subtle, the approach is polished, and the manipulation plays on universal human traits like curiosity and trust. Speaking out about scams, sharing warnings, and normalizing the fact that victims are not stupid is crucial to disrupting this cycle.


    Conclusion: A Puzzle with a Dark Answer

    The Facebook brain game scam that features the odd string of text—BE CV BK 2025 -R-D—isn’t just another spammy post. It’s a carefully designed funnel, starting with harmless-looking puzzles and ending in exploitative Messenger conversations. The strange text before the image is a signal: it marks the post as bait and helps scammers filter and track their victims.

    In the end, this scam is another reminder of how creativity and deception go hand in hand in the world of online fraud. For users, the lesson is clear: stay skeptical, question the unusual, and don’t assume that something that looks fun or harmless really is. For Facebook, the challenge is to finally step up and close the loopholes that allow scams like this to spread unchecked.

    Until then, the best defense is awareness—because in the case of this “puzzle,” the real answer is that it’s not a game at all.

  • Google’s New Policy and the Future of Writing, Reading, and Creative Apps


    Writing and creativity have always thrived when access to tools and stories is open. From the printing press to the rise of self-publishing, every leap in technology has expanded who can create and who can read. In our modern age, smartphones and tablets are the newest printing presses, the newest notebooks, the newest bookstores. They hold writing apps, self-publishing platforms, e-readers, and countless tools for creativity.

    But what happens when access to these tools is restricted? That’s the concern raised by Louis Rossmann, a well-known tech activist who recently criticized Google’s new policy. Under this change, developers who want to distribute apps outside the Google Play Store must now register, verify their identity, and pay a fee. Google also warns that apps installed outside its store are “50 times more likely” to contain malware.

    On the surface, this might seem like a reasonable safety measure. But for writers, readers, and creatives, the consequences could be severe.

    Smartphones as Creative Libraries

    Rossmann reminds us that smartphones aren’t “just phones.” They are computers, and for many, they are also libraries, notebooks, and publishing platforms. Writers use them to draft stories, poets use them to jot down lines on the go, and novelists use apps to organize entire worlds. Readers use apps to access books, from mainstream bestsellers to indie gems that never see the shelves of a chain bookstore.

    The beauty of writing apps and e-reading platforms is their variety. Some come from big companies, but many are built by small developers or independent writers who want to share their work. These creators may not have corporate backing, but they bring diversity and innovation to the literary world.

    Barriers for Indie Authors and Developers

    Under Google’s new policy, independent developers face new obstacles. Imagine a self-published author who has built a free app to share their short stories. Or a small team that develops a poetry journaling app. Or a startup offering an experimental e-reader focused on indie literature.

    Requiring fees and verification creates financial and bureaucratic barriers that many small creators can’t easily overcome. Some may abandon their projects altogether. That means fewer tools for writers and fewer platforms for readers.

    In other words, the policy risks silencing voices that don’t come from big publishing houses or tech companies.

    The Language of Fear: “Sideloading”

    Google’s use of the term “sideloading” is also troubling. The word frames independence as danger. For many readers and writers, some of the best creative apps come from outside the Play Store: apps that allow access to banned books, open-source writing tools, or experimental publishing platforms.

    If users hear that these apps are “unsafe,” they may avoid them entirely. That not only hurts developers, but also weakens the culture of independent literature and creativity.

    Access to Books at Risk

    Consider how many readers today find books through apps, especially those outside mainstream bookstores. Many independent authors distribute their work through alternative e-reading platforms, some of which aren’t hosted on the Play Store. Others rely on small-scale apps to reach audiences that traditional publishing overlooks.

    If those apps become harder to install—or if users are scared away by warnings—access to books shrinks. And when access shrinks, creativity suffers.

    Writing Apps and Education

    Writing isn’t just about publishing books—it’s also about learning. Students use apps to practice creative writing, journaling, and poetry. Teachers use small, independent apps to encourage storytelling in classrooms. Many of these apps are made by educators themselves, without the budget or corporate support to easily navigate Google’s new requirements.

    If these tools disappear, the next generation of writers loses opportunities to explore their voices.

    Creative Independence and Digital Control

    Rossmann warns that this isn’t just about phones—it’s about control. If companies can decide which apps are “safe” enough to install, they hold the keys to creativity itself. Today it’s Android apps; tomorrow it could be software on laptops or e-readers.

    For writers and readers, this is a chilling prospect. The act of writing has always been tied to freedom: freedom of thought, freedom of expression, freedom of access. Restricting how apps are installed means restricting how stories are shared.

    Why It Matters for the Arts

    Some may argue that writers can always publish in books or online blogs. That’s true—but apps are increasingly important for reaching readers. Apps can offer interactive storytelling, poetry generators, or book clubs with built-in discussion features. They can connect readers and writers across the world instantly.

    Restricting these platforms risks narrowing the ways in which stories can be told. Literature doesn’t only belong on shelves—it belongs everywhere, in every form technology allows.

    Conclusion: Protecting Creative Freedom

    Rossmann’s critique highlights something bigger than a software policy. It’s about the future of creativity in a digital world. Writing and reading have always expanded when barriers fall. Google’s new rules build new walls—and those walls may keep out the very voices that literature most needs.

    Smartphones are more than phones—they are libraries, notebooks, and printing presses. Writers and readers deserve the freedom to install the apps that inspire them, without unnecessary gatekeeping.

    If we value creativity, we must also value digital freedom. The future of writing depends on it.

  • French Streamer Jean Pormanove’s Wacky Online Adventures Come to a Sudden Pause


    In a story that reads like a cautionary tale from the land of the internet, a French streamer known as Jean Pormanove—real name Raphaël Graven—recently died during a live broadcast. He was 46.

    Jean had gained fame for participating in all sorts of on-camera stunts and extreme challenges, many of which were designed to entertain viewers. Unfortunately, some of those stunts went too far, and he found himself the target of relentless on-stream abuse and humiliation for several months.

    On Monday, while streaming from his home in Contes, a small village north of Nice, Jean died. Authorities are investigating, but at this stage no evidence of foul play has been found. Interviews are underway, and an autopsy is planned to determine exactly what happened.

    Previously, two of his co-hosts had been investigated for their role in the on-stream abuse, but they were released without charges. French digital minister Clara Chappaz described the ordeal as “an absolute horror” and sent her heartfelt sympathies to Jean’s family and friends.

    Meanwhile, the streaming platform Kick is reviewing the whole affair to make sure the platform stays safe for everyone. A spokesperson said, “We are deeply saddened by Jean’s passing and send our warmest thoughts to his family, friends, and fans.”

    In the end, it’s a story that reminds us all: online entertainment can go dangerously wrong, and platforms, streamers, and audiences alike share responsibility for keeping it from crossing the line.

  • Who Cares If Mutahar Lied About Being an Engineer? Seriously.


    Let’s cut through the noise: Mutahar, aka SomeOrdinaryGamers, got “exposed” for not actually being a licensed computer engineer after years of calling himself one online. And the internet, true to form, immediately exploded with outrage, memes, and finger-pointing. But here’s the thing—

    [Image: Peter from an animated show raising his hand in a classroom and asking, “Who the hell cares?”]

    This isn’t some scammer who conned people out of thousands with fake credentials. This isn’t someone operating on people’s brains without a medical license. Mutahar isn’t building bridges or managing nuclear plants. He’s a content creator on YouTube talking about tech, games, deep web oddities, and digital privacy. At worst, he’s guilty of résumé fluffing — the kind of thing half the internet does every day.

    Yes, in Ontario, calling yourself an “engineer” without a license is technically a legal issue. But that’s not the same as lying to exploit people. This isn’t a criminal fraud case. This is just a guy who oversimplified or exaggerated his background to lend credibility to his commentary. And the reason anyone believed him wasn’t because of the title — it was because his work and knowledge backed it up.

    And that’s the key here: Mutahar has never faked his skills.

    Over the years, he’s proven time and again that he knows what he’s talking about. From breaking down malware and cybersecurity risks to calling out shady behavior in the tech world, his track record of solid, insightful content speaks for itself. Whether he learned that from a job, college, a bootcamp, or his basement — who cares? The results are there. His audience didn’t stick around because he said “engineer” — they stuck around because he delivers.

    And let’s go further: if Mutahar did teach himself everything a computer engineer would know, and he’s consistently demonstrated that skill set — well, then guess what? He is one.

    No, not by government certification. But in practice? If it walks like a computer engineer, talks like one, solves technical problems like one, and helps others like one… it probably is one. The only difference is a piece of paper and a license number. The internet isn’t full of licensed experts — it’s full of people who do the work, and Mutahar’s been doing the work for years.

    Now, some people — like TheArchfiend — are comparing him to creators like Boogie2988, who faked having cancer to manipulate his audience. Let’s be crystal clear: that is not the same. Boogie lied to gain emotional support and financial aid. That was exploitation. Mutahar exaggerated a job title, and nothing more. He didn’t scam anyone. He didn’t manipulate his fans. He didn’t take money under false pretenses.

    Same goes for people saying he’s a hypocrite for calling out MamaMax. Mutahar criticized MamaMax for content ethics and self-image — not for credentials. That’s an entirely different conversation than whether someone’s background lines up perfectly with what they claimed.

    Let’s not fall into false equivalency just because the word “lied” is involved. Context matters. Intent matters. Impact matters.

    So yeah, Mutahar shouldn’t have called himself an engineer if he wasn’t officially licensed. But the outrage is disproportionate, and the comparisons to actual grifters are ridiculous.

    You want to cancel someone for a serious offense? Go after the people who exploit their audiences emotionally, financially, or psychologically. Not the guy who gives you tech advice and malware breakdowns — and happens to know what he’s talking about.

    This isn’t the exposé people think it is.

    Let it go. Move on.

  • When the Rules Change Overnight: What Content Creators Are Worried About


    As a content creator, I’ve come to accept that platforms change. Algorithms shift. Trends evolve. What worked one week might flop the next. But every now and then, something bigger comes along — something that makes us stop and wonder: Are we about to see the internet change in a major way?

    Lately, there’s been a lot of buzz around a new bill called the SCREEN Act. It’s a proposal in Congress aiming to prevent minors from viewing explicit adult content online. On the surface, that sounds reasonable — after all, no one wants kids exposed to things they’re not ready for. But the way the bill plans to do this is raising some eyebrows.

    What’s being proposed is a form of age verification that could dramatically affect how all of us — not just kids — interact with the internet. And as a creator, that makes me a little uneasy.

    Here’s why:

    • Who decides what content is considered “explicit” or “harmful” for minors?
      Definitions can be vague, and that leaves room for overreach. Could educational material, discussions about identity, or even art be swept up in this?
    • Will platforms react by tightening their rules across the board?
      We’ve seen this before — when one kind of content becomes risky, platforms often cast a wider net to avoid lawsuits or backlash. That puts pressure on creators to censor themselves or risk demonetization, shadowbanning, or even removal.
    • Could creators be held responsible for who views their content?
      We already do our best to label content and follow platform rules. But it’s hard to control who clicks, who watches, or how old someone says they are. Are we now expected to police that too?

    This isn’t to say we don’t need better protections for young users online. We absolutely do. But we also need to be careful about how those protections are written into law — and what that means for people who rely on the internet to create, educate, and express themselves.

    As someone who creates with care and intention, I worry about being caught in the middle. I’m not here to post shocking or harmful material — but I also want the freedom to speak honestly, to tell stories, and to reach the people who need to hear them. New laws and policies have the potential to change that balance overnight.

    Whether the SCREEN Act passes or not, it’s a reminder that content creators aren’t just posting for fun — we’re navigating a complicated, evolving digital space where the rules are rarely clear, and the stakes are often high.

  • Age by Algorithm: Why YouTube’s New AI Age Checks Raise Big Questions for Creators and Viewers Alike


    As creators, we know that the digital landscape is constantly evolving — new tools, new guidelines, and yes, new rules about who can see what and when. YouTube’s latest move? Using artificial intelligence to guess a viewer’s age, not based on their birthday, but on their behavior.

    That’s right. YouTube recently announced that it’s rolling out an AI-powered age detection system in the U.S. This system will estimate whether a user is over or under 18 by looking at what they watch, what they search for, and how long they’ve had their account — regardless of the birthdate they entered.

    For creators, this raises a lot of questions.

    1. Will our videos reach the intended audience?
    If someone is misclassified as a minor, they might be automatically excluded from seeing our content — even if it’s not inappropriate. That means creators could lose out on engagement, visibility, and potential revenue due to something as abstract as an algorithmic guess.

    2. What happens if the system gets it wrong?
    The burden falls on users to prove their age with a credit card, government ID, or selfie. This isn’t just a hassle — it’s a potential privacy concern, especially for users who don’t feel comfortable sharing such personal data online.

    3. What about nuance?
    Not all content is clearly “for kids” or “for adults.” Sometimes, it’s educational. Sometimes, it’s artistic. Will AI understand the difference? Or will creators start censoring themselves to avoid being caught in the system’s net?

    This rollout comes on the heels of broader regulatory trends — like the Kids Online Safety Act (KOSA) and the UK’s Online Safety Act — which aim to protect minors online. And while those goals are important, creators and digital users alike are increasingly worried that the methods used to “protect” may lead to overreach, mistrust, or unintended harm.

    YouTube says this approach has worked well in other countries and will be tested with a small group of U.S. users first. But even so, it’s important for us — as creators, viewers, and digital citizens — to pay attention. AI isn’t perfect. And when it’s used to gatekeep access, influence algorithms, or reshape who sees our work, the stakes are higher than ever.

    Let’s keep the conversation going. Let’s stay informed. And most of all, let’s advocate for smart solutions that protect young users without punishing creativity, curiosity, or community.

  • Creators and Congress: Why I’m Keeping an Eye on New Changes to Internet Laws


    As someone who creates content online, I’m always paying attention to how the internet is changing — not just in terms of trends or technology, but also in terms of laws and policies. Recently, there’s been a lot of buzz about something called the Congressional Creators Caucus, and it got me thinking about what this might mean for people like me — and for the people who watch, read, or listen to our work.

    The Congressional Creators Caucus was launched earlier this year, and it’s meant to give digital content creators a stronger voice in Washington, D.C. It was supported by MatPat — a name many YouTube fans will recognize from Game Theory — and his wife Stephanie. They’ve been involved in the world of online content for a long time, so in some ways, it makes sense that they’d want to help creators be heard at the policy level.

    The idea of Congress listening to creators might sound exciting. And in some ways, it is. Creators work incredibly hard — often for long hours, with little financial certainty — and we face real challenges with algorithms, content rules, monetization changes, and staying safe online. Having lawmakers recognize those challenges is a step in the right direction.

    But I’m also cautious. Alongside this new caucus, there’s a federal bill called the Kids Online Safety Act (KOSA) that’s getting attention. On the surface, KOSA is about making the internet safer for kids, which is something we can all agree is important. But like a lot of things in government, the way a bill is written matters just as much as what it’s trying to do.

    Some creators, advocates, and privacy experts are worried that KOSA could go too far. Depending on how the rules are enforced, it could lead to too much content being taken down, especially posts that talk honestly about mental health, identity, or growing up. Others are concerned that it could require websites and apps to collect more personal information to “verify age,” which raises questions about online privacy — something that matters to everyone, no matter your age.

    I don’t want to sound alarmist. These conversations are still happening, and nothing is set in stone. But I do think it’s fair for creators to ask questions and stay informed. If policies like these change how we’re allowed to post, what we can share, or how audiences can find us, it’s going to affect not just creators — but also the communities we’ve built with our audiences over time.

    This isn’t about being for or against something politically. It’s about making sure we don’t rush into decisions that could unintentionally hurt the very people we’re trying to protect. We need laws that make the internet safer without silencing important voices or putting up walls between creators and their supporters.

    As someone who cares deeply about creativity, connection, and communication, I’m hopeful we can find the right balance. But until then, I’ll keep watching and speaking up — because the internet has given creators a place to thrive, and we shouldn’t lose that.