Civics 101 Guidelines

Want to argue about politics? Healthcare reform? Taxes? Governments? You've come to the right place!

Try to keep it civil though. The rules still apply here.

The Dystopian Side of Artificial Intelligence

  • The Dystopian Side of Artificial Intelligence

    I wanted to find a better place to post this, but after much consideration I guess into the Civics Dumpster Fire it goes...

    So, Artificial Intelligence has been making waves lately in a lot of circles. Artists, Writers, Gamers, Voice Actors, Translators, and Tech Fanatics have all been talking about it. There's a lot of heated discussion surrounding it, but love it or hate it, AI is here to stay.

    And that concerns me.

    To start, there's the "DEY TERK ARE JERBS" aspect... which is a mixed bag. Sure, a lot of people are going to be out of a job and it's going to be harder to get into the Creatives as a profession, but even Creatives themselves admit that their line of work has been a haven for unemployable hacks for quite some time. I also don't know why some people are shocked that their line of work isn't immune to the Technological Innovation happening all around us.

    There's also the degradation of quality and aesthetics portion of the problem. Sure, the AI stuff seems cool, but I've noticed that folks with a sense of aesthetics can tell something is 'off' about it, whereas people with experience in the relevant field can immediately pick apart every technical flaw. As one blog post I read a while back said, "The fact that an AI Picture could win an art contest says more about the decline in the quality of artists than it does the AI Art itself."

    Then there's my biggest beef with it: It's a dystopian Nightmare just waiting to happen. Especially the voice aspect.

    Here in Canada, since Bill C-11 was passed, Net Neutrality has been basically dead and they've been passing even more laws to desecrate its corpse. The most recent Bill they're trying to pass is the Online Harms Act, which is your standard political Motte and Bailey... They address a lot of stuff that should have been addressed years ago, but they also lump in a bunch of stuff that's a bit more ambiguous or otherwise harmless. The long and short of it is that you can basically get Soap on a Rope for up to five years for posting hate speech and other shady things online (and I imagine being put on a Sex Offender Registry if your crimes were of that nature).

    That alone is scary enough, especially because they're going to be applying it to stuff that happened prior to whenever the Bill passes, and Hate Speech is one of those things that's poorly defined and enforced. However, when you factor in that Voice Cloning AI is a thing, your voice constantly gets recorded (Customer Service Calls), and Databases get compromised all the time, it's a recipe for disaster. All some vengeful jackass has to do is get enough samples of your voice and they can just upload it online pretending to be you, and then go to the cops with evidence that you're spewing stuff online that's normally reserved for Call of Duty Voice Chat. Granted, this could backfire hilariously if it's discovered that the person reporting it was the one who made the hateful comment, but I'd sooner trust a punch bowl in Jonestown than rely on modern law enforcement to do its job properly.

    I don't think this is some sort of grand conspiracy, but it is a problem that needs to be addressed before it spirals out of control.
    Have You Touched Grass Today? If Not, Please Do.

  • #2
    Originally posted by Chaotic Void View Post
    Interestingly, I was going to start a thread following an article I have just read in The New Yorker. I shall therefore post the content here. Caveat lector - there are two instances of strong language that I have modified.

    https://www.newyorker.com/science/an...d=CRMNYR012019


    On a recent night, a woman named Robin was asleep next to her husband, Steve, in their Brooklyn home, when her phone buzzed on the bedside table. Robin is in her mid-thirties with long, dirty-blond hair. She works as an interior designer, specializing in luxury homes. The couple had gone out to a natural-wine bar in Cobble Hill that evening, and had come home a few hours earlier and gone to bed. Their two young children were asleep in bedrooms down the hall. “I’m always, like, kind of one ear awake,” Robin told me, recently. When her phone rang, she opened her eyes and looked at the caller I.D. It was her mother-in-law, Mona, who never called after midnight. “I’m, like, maybe it’s a butt-dial,” Robin said. “So I ignore it, and I try to roll over and go back to bed. But then I see it pop up again.”

    She picked up the phone, and, on the other end, she heard Mona’s voice wailing and repeating the words “I can’t do it, I can’t do it.” “I thought she was trying to tell me that some horrible tragic thing had happened,” Robin told me. Mona and her husband, Bob, are in their seventies. She’s a retired party planner, and he’s a dentist. They spend the warm months in Bethesda, Maryland, and winters in Boca Raton, where they play pickleball and canasta. Robin’s first thought was that there had been an accident. Robin’s parents also winter in Florida, and she pictured the four of them in a car wreck. “Your brain does weird things in the middle of the night,” she said. Robin then heard what sounded like Bob’s voice on the phone. (The family members requested that their names be changed to protect their privacy.) “Mona, pass me the phone,” Bob’s voice said, then, “Get Steve. Get Steve.” Robin took this—that they didn’t want to tell her while she was alone—as another sign of their seriousness. She shook Steve awake. “I think it’s your mom,” she told him. “I think she’s telling me something terrible happened.”

    Steve, who has close-cropped hair and an athletic build, works in law enforcement. When he opened his eyes, he found Robin in a state of panic. “She was screaming,” he recalled. “I thought her whole family was dead.” When he took the phone, he heard a relaxed male voice—possibly Southern—on the other end of the line. “You’re not gonna call the police,” the man said. “You’re not gonna tell anybody. I’ve got a gun to your mom’s head, and I’m gonna blow her brains out if you don’t do exactly what I say.”

    Steve used his own phone to call a colleague with experience in hostage negotiations. The colleague was muted, so that he could hear the call but wouldn’t be heard. “You hear this???” Steve texted him. “What should I do?” The colleague wrote back, “Taking notes. Keep talking.” The idea, Steve said, was to continue the conversation, delaying violence and trying to learn any useful information.

    “I want to hear her voice,” Steve said to the man on the phone.

    The man refused. “If you ask me that again, I’m gonna kill her,” he said. “Are you ****ing crazy?”

    “O.K.,” Steve said. “What do you want?”

    The man demanded money for travel; he wanted five hundred dollars, sent through Venmo. “It was such an insanely small amount of money for a human being,” Steve recalled. “But also: I’m obviously gonna pay this.” Robin, listening in, reasoned that someone had broken into Steve’s parents’ home to hold them up for a little cash. On the phone, the man gave Steve a Venmo account to send the money to. It didn’t work, so he tried a few more, and eventually found one that did. The app asked what the transaction was for.

    “Put in a pizza emoji,” the man said.

    After Steve sent the five hundred dollars, the man patched in a female voice—a girlfriend, it seemed—who said that the money had come through, but that it wasn’t enough. Steve asked if his mother would be released, and the man got upset that he was bringing this up with the woman listening. “Whoa, whoa, whoa,” he said. “Baby, I’ll call you later.” The implication, to Steve, was that the woman didn’t know about the hostage situation. “That made it even more real,” Steve told me. The man then asked for an additional two hundred and fifty dollars to get a ticket for his girlfriend. “I’ve gotta get my baby mama down here to me,” he said. Steve sent the additional sum, and, when it processed, the man hung up.

    By this time, about twenty-five minutes had elapsed. Robin cried and Steve spoke to his colleague. “You guys did great,” the colleague said. He told them to call Bob, since Mona’s phone was clearly compromised, to make sure that he and Mona were now safe. After a few tries, Bob picked up the phone and handed it to Mona. “Are you at home?” Steve and Robin asked her. “Are you O.K.?”

    Mona sounded fine, but she was unsure of what they were talking about. “Yeah, I’m in bed,” she replied. “Why?”

    Artificial intelligence is revolutionizing seemingly every aspect of our lives: medical diagnosis, weather forecasting, space exploration, and even mundane tasks like writing e-mails and searching the Internet. But with increased efficiencies and computational accuracy has come a Pandora’s box of trouble. Deepfake video content is proliferating across the Internet. The month after Russia invaded Ukraine, a video surfaced on social media in which Ukraine’s President, Volodymyr Zelensky, appeared to tell his troops to surrender. (He had not done so.) In early February of this year, Hong Kong police announced that a finance worker had been tricked into paying out twenty-five million dollars after taking part in a video conference with people he thought were members of his firm’s senior staff. (They were not.) Thanks to large language models like ChatGPT, phishing e-mails have grown increasingly sophisticated, too. Steve and Robin, meanwhile, fell victim to another new scam, which uses A.I. to replicate a loved one’s voice. “We’ve now passed through the uncanny valley,” Hany Farid, who studies generative A.I. and manipulated media at the University of California, Berkeley, told me. “I can now clone the voice of just about anybody and get them to say just about anything. And what you think would happen is exactly what’s happening.”

    Robots aping human voices are not new, of course. In 1984, an Apple computer became one of the first that could read a text file in a tinny robotic voice of its own. “Hello, I’m Macintosh,” a squat machine announced to a live audience, at an unveiling with Steve Jobs. “It sure is great to get out of that bag.” The computer took potshots at Apple’s main competitor at the time, saying, “I’d like to share with you a maxim I thought of the first time I met an I.B.M. mainframe: never trust a computer you can’t lift.” In 2011, Apple released Siri; inspired by “Star Trek” ’s talking computers, the program could interpret precise commands—“Play Steely Dan,” say, or, “Call Mom”—and respond with a limited vocabulary. Three years later, Amazon released Alexa. Synthesized voices were cohabiting with us.

    Still, until a few years ago, advances in synthetic voices had plateaued. They weren’t entirely convincing. “If I’m trying to create a better version of Siri or G.P.S., what I care about is naturalness,” Farid explained. “Does this sound like a human being and not like this creepy half-human, half-robot thing?” Replicating a specific voice is even harder. “Not only do I have to sound human,” Farid went on. “I have to sound like you.” [...] Publications, including The New Yorker, use ElevenLabs to offer audio narrations of stories. Last year, New York’s mayor, Eric Adams, sent out A.I.-enabled robocalls in Mandarin and Yiddish—languages he does not speak. (Privacy advocates called this a “creepy vanity project.”)

    But, more often, the technology seems to be used for nefarious purposes, like fraud. This has become easier now that TikTok, YouTube, and Instagram store endless videos of regular people talking. “It’s simple,” Farid explained. “You take thirty or sixty seconds of a kid’s voice and log in to ElevenLabs, and pretty soon Grandma’s getting a call in Grandson’s voice saying, ‘Grandma, I’m in trouble, I’ve been in an accident.’ ” A financial request is almost always the end game. Farid went on, “And here’s the thing: the bad guy can fail ninety-nine per cent of the time, and they will still become very, very rich. It’s a numbers game.” The prevalence of these illegal efforts is difficult to measure, but, anecdotally, they’ve been on the rise for a few years. In 2020, a corporate attorney in Philadelphia took a call from what he thought was his son, who said he had been injured in a car wreck involving a pregnant woman and needed nine thousand dollars to post bail. (He found out it was a scam when his daughter-in-law called his son’s office, where he was safely at work.) In January, voters in New Hampshire received a robocall in Joe Biden’s voice telling them not to vote in the primary. (The man who admitted to generating the call said that he had used ElevenLabs software.) “I didn’t think about it at the time that it wasn’t his real voice,” an elderly Democrat in New Hampshire told the Associated Press. “That’s how convincing it was.”

    Predictably, technology has outstripped regulation. Current copyright laws don’t protect a person’s voice. “A key question is whether authentication tools can keep up with advances in deepfake synthesis,” Senator Jon Ossoff, of Georgia, who chaired a Senate Judiciary Committee hearing on the matter last year, told me. “Can we get good enough fast enough at discerning real from fake, or will we lose the ability to verify the authenticity of voices, images, video, and other media?” He described the matter as an “urgent” one for lawmakers. In January, a bipartisan group introduced the QUIET Act, which would increase penalties for those who use A.I. to impersonate people. In Arizona, a state senator introduced a bill that would designate A.I. as a weapon when used in conjunction with a crime, also allowing lengthier sentences.

    The Federal Trade Commission, which investigates consumer fraud, reported that Americans lost more than two million dollars to impostor scams of various kinds in 2022. Last year, the F.T.C. put out a voice-cloning advisory, noting, “If the caller says to wire money, send cryptocurrency, or buy gift cards and give them the card numbers and PINs, those could be signs of a scam.” But it, too, has not yet created any guidelines for the use of voice-cloning technology. Even if laws are enacted, policing them will be exceedingly difficult. Scammers can use encrypted apps to execute their schemes, and calls are completed in minutes. “By the time you get there, the scam is over, and everybody’s moved on,” Farid said.

    A decade ago, the F.T.C. sponsored a competition to counter the rise of robocalls, and one of its winners went on to create Nomorobo, a call-blocking service that has helped to reduce—but not eliminate—the phenomenon. Late last year, the commission offered a twenty-five-thousand-dollar prize for the development of new ways to protect consumers from voice cloning. It received around seventy-five submissions, which focus on prevention, authentication, and real-time detection. Some of the submissions use artificial intelligence, while others rely on metadata or watermarking. (Judging will be completed by April.) Will Maxson, who is managing the F.T.C.’s challenge, told me, “We’re hoping we’ll spur some innovators to come up with products and services that will help reduce this new threat.” But it’s not at all clear how effective they will be. “There are no silver bullets,” he acknowledged.

    A few months ago, Farid, the Berkeley professor, participated in a Zoom call with Barack Obama. The former President was interested, he said, in learning about generative A.I. During the Zoom call, Farid found himself in an increasingly familiar online state of mind: doubt. “People have spent so much time trying to make deepfakes with Obama that I spent, like, the first ten minutes being, like, I don’t know, man, I don’t think this is him,” he said, laughing. In the end, he determined that it was the real Obama. Still, the experience was unnerving. “Shit’s getting weird,” he said.

    One Friday last January, Jennifer DeStefano, who lives in Scottsdale, Arizona, got a call while walking into a dance studio where the younger of her two teen-age daughters, Aubrey, had just wrapped up a rehearsal. The caller I.D. read “unknown,” so DeStefano ignored it at first. Then she reconsidered: Brianna, her older daughter, was on a ski trip up north, and, DeStefano thought, maybe something had happened. She took the call on speaker phone. “Mom, I messed up!” Brianna’s voice said, sobbing in her uniquely controlled way. A man with a Spanish accent could be heard telling her, “Lay down and put your head back.” Then Brianna said, “Mom, these bad men have me. Help me, help me, help me.” One of the men took the phone, as Brianna sobbed and pleaded in the background. “I have your daughter,” he said. “If you seek any help from anyone, I’ll pump her stomach so full of drugs.” He’d have his way with her, he continued, and then he’d leave her for dead.

    DeStefano ran into the dance studio and screamed for help. Three other mothers responded: one called 911, one called DeStefano’s husband, and one sat with DeStefano while she talked on the phone. First, the man demanded a million dollars, but DeStefano said that wasn’t possible, so he lowered the sum to fifty thousand. As they discussed how to get the money to him, the mother who’d called 911 came back inside and said that she’d learned that the call might be a scam. DeStefano, who considers herself “pretty savvy,” was unconvinced. “I talked to her,” DeStefano replied. She continued speaking to the man, who decided that he wanted to arrange a physical pickup of the money: a white van would meet DeStefano somewhere, and someone would put a bag over her head, and bring her to him. She recalled, “He said I had better have all the cash, or else we were both dead.”

    Soon, though, the second mother hurried over. She had located DeStefano’s husband, who confirmed that he was with Brianna. DeStefano eventually got ahold of her older daughter. “I have no idea what’s going on, or what you’re talking about,” Brianna told her. “I’m with Dad.” Eventually, DeStefano returned to her phone call. “I called the guys out for being the lowest of the low,” DeStefano said. “I used vulgar words. Then I just hung up.”

    DeStefano went public with her experience, eventually testifying about it before the Senate Judiciary Committee. Other victims reached out. Another mother at the dance studio had a cousin who’d been scammed just two weeks earlier. “The call came in from her daughter’s phone, and she actually sent fifteen hundred dollars,” DeStefano said. She told me that a friend had received a call from what sounded like her nine-year-old son: “He’d been kidnapped, he said. But she’d just tucked him in bed after reading a story, so she knew it wasn’t true.”

    RaeLee Jorgensen, a thirty-four-year-old teacher’s aide, contacted DeStefano. Last April, while waiting for her two youngest children to get out of school, she received a phone call from her oldest son’s number. “Hey, Mom,” her fourteen-year-old son’s voice said. “This is Tate.” He was using his family nickname. “And it was his voice,” Jorgensen told me. “But I could tell something was wrong. I asked what it was.” Then another voice said, “I have your son and I’m going to shoot him in the head.” Jorgensen panicked and hung up. Ten minutes later, she received confirmation from Tate’s school that her son was safe, and now sitting in the principal’s office. Even DeStefano’s mother received a scam call. Months before DeStefano’s ordeal, someone had called her mother claiming to be DeStefano’s brother, and asking for money to pay a hospital bill related to a car accident. But DeStefano’s mother could sense that something was off. “She’s hard of hearing, but she’s still sharp,” DeStefano said. “She hung up.”

    Robin and Steve, in Brooklyn, eventually got their money back from Venmo. Today, they’re able to joke about some aspects of the ordeal: the pizza-emoji instruction, for example. “But we told everyone we knew to be aware of this very sophisticated thing,” Robin said. The family has created a plan for the next time. “It doesn’t seem like this scam is going to stop anytime soon,” Robin told me. “So we came up with an extended-family password. If one of us is in trouble, others can verify that it’s really them.” When I recently called up Mona, her mother-in-law, though, she confessed that she’d already forgotten the family password: “I’m going to have to go over it.” She added that it took her a while to accept one aspect of the call. “Seven hundred and fifty dollars,” she said. “I still can’t believe that’s all I was worth.” ♦


    "It ain't necessarily so
    The things that you're liable
    To read in the Bible
    It ain't necessarily so."

    Sportin' Life
    Porgy & Bess, DuBose Heyward, George & Ira Gershwin



    • #3
      I am too lazy to write a response, so I asked Microsoft Copilot to do it for me. Point 5 is the obligatory woke response that AI has to give nowadays.


      The impact of Artificial Intelligence (AI) on the workforce is a complex and multifaceted topic. Let’s explore some key points:
      1. Automation Potential:
      2. Displacement Effect:
      3. Job Creation:
        • AI and related technologies can boost economic growth, leading to the creation of additional job opportunities.
        • Just as past technological innovations (such as steam engines and computers) generated new jobs, AI systems and robots can enhance productivity, reduce costs, and improve product quality.
      4. Skill Adaptation:
        • Workers with strong digital skills are better positioned to adapt to and use AI at work. These skills enable them to reap the benefits of AI technologies.
        • Upskilling and reskilling programs are crucial to help workers transition into new roles and acquire the necessary skills.
      5. Inequality and Discrimination:
        • While AI can enhance productivity, it can also exacerbate inequality and lead to discrimination against workers.
        • Ensuring equitable access to AI benefits and addressing potential biases are essential considerations.

      In summary, AI will transform the workforce, but it won’t necessarily replace humans outright.



      • #4
        Originally posted by Hypatia_Alexandria View Post

        Interestingly I was going to start a thread following an article I have just read in The New Yorker. I shall therefore post the content here. Caveat lector - there are two incidents of strong language that I have modified.

        https://www.newyorker.com/science/an...d=CRMNYR012019


        n a recent night, a woman named Robin was asleep next to her husband, Steve, in their Brooklyn home, when her phone buzzed on the bedside table. Robin is in her mid-thirties with long, dirty-blond hair. She works as an interior designer, specializing in luxury homes. The couple had gone out to a natural-wine bar in Cobble Hill that evening, and had come home a few hours earlier and gone to bed. Their two young children were asleep in bedrooms down the hall. “I’m always, like, kind of one ear awake,” Robin told me, recently. When her phone rang, she opened her eyes and looked at the caller I.D. It was her mother-in-law, Mona, who never called after midnight. “I’m, like, maybe it’s a butt-dial,” Robin said. “So I ignore it, and I try to roll over and go back to bed. But then I see it pop up again.”

        She picked up the phone, and, on the other end, she heard Mona’s voice wailing and repeating the words “I can’t do it, I can’t do it.” “I thought she was trying to tell me that some horrible tragic thing had happened,” Robin told me. Mona and her husband, Bob, are in their seventies. She’s a retired party planner, and he’s a dentist. They spend the warm months in Bethesda, Maryland, and winters in Boca Raton, where they play pickleball and canasta. Robin’s first thought was that there had been an accident. Robin’s parents also winter in Florida, and she pictured the four of them in a car wreck. “Your brain does weird things in the middle of the night,” she said. Robin then heard what sounded like Bob’s voice on the phone. (The family members requested that their names be changed to protect their privacy.) “Mona, pass me the phone,” Bob’s voice said, then, “Get Steve. Get Steve.” Robin took this—that they didn’t want to tell her while she was alone—as another sign of their seriousness. She shook Steve awake. “I think it’s your mom,” she told him. “I think she’s telling me something terrible happened.”

        Steve, who has close-cropped hair and an athletic build, works in law enforcement. When he opened his eyes, he found Robin in a state of panic. “She was screaming,” he recalled. “I thought her whole family was dead.” When he took the phone, he heard a relaxed male voice—possibly Southern—on the other end of the line. “You’re not gonna call the police,” the man said. “You’re not gonna tell anybody. I’ve got a gun to your mom’s head, and I’m gonna blow her brains out if you don’t do exactly what I say.”

        Steve used his own phone to call a colleague with experience in hostage negotiations. The colleague was muted, so that he could hear the call but wouldn’t be heard. “You hear this???” Steve texted him. “What should I do?” The colleague wrote back, “Taking notes. Keep talking.” The idea, Steve said, was to continue the conversation, delaying violence and trying to learn any useful information.

        “I want to hear her voice,” Steve said to the man on the phone.

        The man refused. “If you ask me that again, I’m gonna kill her,” he said. “Are you ****ing crazy?”

        “O.K.,” Steve said. “What do you want?”

        The man demanded money for travel; he wanted five hundred dollars, sent through Venmo. “It was such an insanely small amount of money for a human being,” Steve recalled. “But also: I’m obviously gonna pay this.” Robin, listening in, reasoned that someone had broken into Steve’s parents’ home to hold them up for a little cash. On the phone, the man gave Steve a Venmo account to send the money to. It didn’t work, so he tried a few more, and eventually found one that did. The app asked what the transaction was for.

        “Put in a pizza emoji,” the man said.

        After Steve sent the five hundred dollars, the man patched in a female voice—a girlfriend, it seemed—who said that the money had come through, but that it wasn’t enough. Steve asked if his mother would be released, and the man got upset that he was bringing this up with the woman listening. “Whoa, whoa, whoa,” he said. “Baby, I’ll call you later.” The implication, to Steve, was that the woman didn’t know about the hostage situation. “That made it even more real,” Steve told me. The man then asked for an additional two hundred and fifty dollars to get a ticket for his girlfriend. “I’ve gotta get my baby mama down here to me,” he said. Steve sent the additional sum, and, when it processed, the man hung up.

        By this time, about twenty-five minutes had elapsed. Robin cried and Steve spoke to his colleague. “You guys did great,” the colleague said. He told them to call Bob, since Mona’s phone was clearly compromised, to make sure that he and Mona were now safe. After a few tries,

        Bob picked up the phone and handed it to Mona. “Are you at home?” Steve and Robin asked her. “Are you O.K.?”

        Mona sounded fine, but she was unsure of what they were talking about. “Yeah, I’m in bed,” she replied. “Why?”

        Artificial intelligence is revolutionizing seemingly every aspect of our lives: medical diagnosis, weather forecasting, space exploration, and even mundane tasks like writing e-mails and searching the Internet. But with increased efficiencies and computational accuracy has come a Pandora’s box of trouble. Deepfake video content is proliferating across the Internet. The month after Russia invaded Ukraine, a video surfaced on social media in which Ukraine’s President, Volodymyr Zelensky, appeared to tell his troops to surrender. (He had not done so.) In early February of this year, Hong Kong police announced that a finance worker had been tricked into paying out twenty-five million dollars after taking part in a video conference with who he thought were members of his firm’s senior staff. (They were not.) Thanks to large language models like ChatGPT, phishing e-mails have grown increasingly sophisticated, too. Steve and Robin, meanwhile, fell victim to another new scam, which uses A.I. to replicate a loved one’s voice. “We’ve now passed through the uncanny valley,” Hany Farid, who studies generative A.I. and manipulated media at the University of California, Berkeley, told me. “I can now clone the voice of just about anybody and get them to say just about anything. And what you think would happen is exactly what’s happening.”

        Robots aping human voices are not new, of course. In 1984, an Apple computer became one of the first that could read a text file in a tinny robotic voice of its own. “Hello, I’m Macintosh,” a squat machine announced to a live audience, at an unveiling with Steve Jobs. “It sure is great to get out of that bag.” The computer took potshots at Apple’s main competitor at the time, saying, “I’d like to share with you a maxim I thought of the first time I met an I.B.M. mainframe: never trust a computer you can’t lift.” In 2011, Apple released Siri; inspired by “Star Trek” ’s talking computers, the program could interpret precise commands—“Play Steely Dan,” say, or, “Call Mom”—and respond with a limited vocabulary. Three years later, Amazon released Alexa. Synthesized voices were cohabiting with us.

        Still, until a few years ago, advances in synthetic voices had plateaued. They weren’t entirely convincing. “If I’m trying to create a better version of Siri or G.P.S., what I care about is naturalness,” Farid explained. “Does this sound like a human being and not like this creepy half-human, half-robot thing?” Replicating a specific voice is even harder. “Not only do I have to sound human,” Farid went on. “I have to sound like you.” Recent advances in generative A.I. have changed that, and voice cloning has found legitimate uses: some publications, including The New Yorker, use ElevenLabs to offer audio narrations of stories. Last year, New York’s mayor, Eric Adams, sent out A.I.-enabled robocalls in Mandarin and Yiddish—languages he does not speak. (Privacy advocates called this a “creepy vanity project.”)

        But, more often, the technology seems to be used for nefarious purposes, like fraud. This has become easier now that TikTok, YouTube, and Instagram store endless videos of regular people talking. “It’s simple,” Farid explained. “You take thirty or sixty seconds of a kid’s voice and log in to ElevenLabs, and pretty soon Grandma’s getting a call in Grandson’s voice saying, ‘Grandma, I’m in trouble, I’ve been in an accident.’ ” A financial request is almost always the end game. Farid went on, “And here’s the thing: the bad guy can fail ninety-nine per cent of the time, and they will still become very, very rich. It’s a numbers game.” The prevalence of these illegal efforts is difficult to measure, but, anecdotally, they’ve been on the rise for a few years. In 2020, a corporate attorney in Philadelphia took a call from what he thought was his son, who said he had been injured in a car wreck involving a pregnant woman and needed nine thousand dollars to post bail. (He found out it was a scam when his daughter-in-law called his son’s office, where he was safely at work.) In January, voters in New Hampshire received a robocall in Joe Biden’s voice telling them not to vote in the primary. (The man who admitted to generating the call said that he had used ElevenLabs software.) “I didn’t think about it at the time that it wasn’t his real voice,” an elderly Democrat in New Hampshire told the Associated Press. “That’s how convincing it was.”

        Predictably, technology has outstripped regulation. Current copyright laws don’t protect a person’s voice. “A key question is whether authentication tools can keep up with advances in deepfake synthesis,” Senator Jon Ossoff, of Georgia, who chaired a Senate Judiciary Committee hearing on the matter last year, told me. “Can we get good enough fast enough at discerning real from fake, or will we lose the ability to verify the authenticity of voices, images, video, and other media?” He described the matter as an “urgent” one for lawmakers. In January, a bipartisan group introduced the QUIET Act, which would increase penalties for those who use A.I. to impersonate people. In Arizona, a state senator introduced a bill that would designate A.I. as a weapon when used in conjunction with a crime, also allowing lengthier sentences.

        The Federal Trade Commission, which investigates consumer fraud, reported that Americans lost more than two billion dollars to impostor scams of various kinds in 2022. Last year, the F.T.C. put out a voice-cloning advisory, noting, “If the caller says to wire money, send cryptocurrency, or buy gift cards and give them the card numbers and PINs, those could be signs of a scam.” But it, too, has not yet created any guidelines for the use of voice-cloning technology. Even if laws are enacted, policing them will be exceedingly difficult. Scammers can use encrypted apps to execute their schemes, and calls are completed in minutes. “By the time you get there, the scam is over, and everybody’s moved on,” Farid said.

        A decade ago, the F.T.C. sponsored a competition to counter the rise of robocalls, and one of its winners went on to create Nomorobo, a call-blocking service that has helped to reduce—but not eliminate—the phenomenon. Late last year, the commission offered a twenty-five-thousand-dollar prize for the development of new ways to protect consumers from voice cloning. It received around seventy-five submissions, which focus on prevention, authentication, and real-time detection. Some of the submissions use artificial intelligence, while others rely on metadata or watermarking. (Judging will be completed by April.) Will Maxson, who is managing the F.T.C.’s challenge, told me, “We’re hoping we’ll spur some innovators to come up with products and services that will help reduce this new threat.” But it’s not at all clear how effective they will be. “There are no silver bullets,” he acknowledged.

        A few months ago, Farid, the Berkeley professor, participated in a Zoom call with Barack Obama. The former President was interested, he said, in learning about generative A.I. During the Zoom call, Farid found himself in an increasingly familiar online state of mind: doubt. “People have spent so much time trying to make deepfakes with Obama that I spent, like, the first ten minutes being, like, I don’t know, man, I don’t think this is him,” he said, laughing. In the end, he determined that it was the real Obama. Still, the experience was unnerving. “Shit’s getting weird,” he said.

        One Friday last January, Jennifer DeStefano, who lives in Scottsdale, Arizona, got a call while walking into a dance studio where the younger of her two teen-age daughters, Aubrey, had just wrapped up a rehearsal. The caller I.D. read “unknown,” so DeStefano ignored it at first. Then she reconsidered: Briana, her older daughter, was on a ski trip up north, and, DeStefano thought, maybe something had happened. She took the call on speaker phone. “Mom, I messed up!” Briana’s voice said, sobbing in her uniquely controlled way. A man with a Spanish accent could be heard telling her, “Lay down and put your head back.” Then Briana said, “Mom, these bad men have me. Help me, help me, help me.” One of the men took the phone, as Briana sobbed and pleaded in the background. “I have your daughter,” he said. “If you seek any help from anyone, I’ll pump her stomach so full of drugs.” He’d have his way with her, he continued, and then he’d leave her for dead.

        DeStefano ran into the dance studio and screamed for help. Three other mothers responded: one called 911, one called DeStefano’s husband, and one sat with DeStefano while she talked on the phone. First, the man demanded a million dollars, but DeStefano said that wasn’t possible, so he lowered the sum to fifty thousand. As they discussed how to get the money to him, the mother who’d called 911 came back inside and said that she’d learned that the call might be a scam. DeStefano, who considers herself “pretty savvy,” was unconvinced. “I talked to her,” DeStefano replied. She continued speaking to the man, who decided that he wanted to arrange a physical pickup of the money: a white van would meet DeStefano somewhere, and someone would put a bag over her head, and bring her to him. She recalled, “He said I had better have all the cash, or else we were both dead.”

        Soon, though, the second mother hurried over. She had located DeStefano’s husband, who confirmed that he was with Briana. DeStefano eventually got ahold of her older daughter. “I have no idea what’s going on, or what you’re talking about,” Briana told her. “I’m with Dad.” Eventually, DeStefano returned to her phone call. “I called the guys out for being the lowest of the low,” DeStefano said. “I used vulgar words. Then I just hung up.”

        DeStefano went public with her experience, eventually testifying about it before the Senate Judiciary Committee. Other victims reached out. Another mother at the dance studio had a cousin who’d been scammed just two weeks earlier. “The call came in from her daughter’s phone, and she actually sent fifteen hundred dollars,” DeStefano said. She told me that a friend had received a call from what sounded like her nine-year-old son: “He’d been kidnapped, he said. But she’d just tucked him in bed after reading a story, so she knew it wasn’t true.”

        RaeLee Jorgensen, a thirty-four-year-old teacher’s aide, contacted DeStefano. Last April, while waiting for her two youngest children to get out of school, she received a phone call from her oldest son’s number. “Hey, Mom,” her fourteen-year-old son’s voice said. “This is Tate.” He was using his family nickname. “And it was his voice,” Jorgensen told me. “But I could tell something was wrong. I asked what it was.” Then another voice said, “I have your son and I’m going to shoot him in the head.” Jorgensen panicked and hung up. Ten minutes later, she received confirmation from Tate’s school that her son was safe and now sitting in the principal’s office. Even DeStefano’s mother received a scam call. Months before DeStefano’s ordeal, someone had called her mother claiming to be DeStefano’s brother, and asking for money to pay a hospital bill related to a car accident. But DeStefano’s mother could sense that something was off. “She’s hard of hearing, but she’s still sharp,” DeStefano said. “She hung up.”

        Robin and Steve, in Brooklyn, eventually got their money back from Venmo. Today, they’re able to joke about some aspects of the ordeal: the pizza-emoji instruction, for example. “But we told everyone we knew to be aware of this very sophisticated thing,” Robin said. The family has created a plan for the next time. “It doesn’t seem like this scam is going to stop anytime soon,” Robin told me. “So we came up with an extended-family password. If one of us is in trouble, others can verify that it’s really them.” When I recently called up Mona, her mother-in-law, though, she confessed that she’d already forgotten the family password: “I’m going to have to go over it.” She added that it took her a while to accept one aspect of the call. “Seven hundred and fifty dollars,” she said. “I still can’t believe that’s all I was worth.” ♦

        Yeah, the Scammer-Empowering aspect is another problem that needs to be addressed. I can't believe I forgot about it, considering that Knockoff Wonka Scam in Scotland is fresh in my memory.

        Unfortunately, the legislators and law enforcers tend to take their sweet time adapting to cyber-crimes like this.
        Have You Touched Grass Today? If Not, Please Do.

        Comment


        • #5
          Originally posted by Sparko View Post
          I am too lazy to write a response, so I asked Microsoft Copilot to do it for me. Point 5 is the obligatory woke response that AI has to give nowadays.


          The impact of Artificial Intelligence (AI) on the workforce is a complex and multifaceted topic. Let’s explore some key points:
          1. Automation Potential:
          2. Displacement Effect:
          3. Job Creation:
            • AI and related technologies can boost economic growth, leading to the creation of additional job opportunities.
            • Just as past technological innovations (such as steam engines and computers) generated new jobs, AI systems and robots can enhance productivity, reduce costs, and improve product quality.
          4. Skill Adaptation:
            • Workers with strong digital skills are better positioned to adapt to and use AI at work. These skills enable them to reap the benefits of AI technologies.
            • Upskilling and reskilling programs are crucial to help workers transition into new roles and acquire the necessary skills.
          5. Inequality and Discrimination:
            • While AI can enhance productivity, it can also exacerbate inequality and lead to discrimination against workers.
            • Ensuring equitable access to AI benefits and addressing potential biases are essential considerations.

          In summary, AI will transform the workforce, but it won’t necessarily replace humans outright.
          Good one. Now get it to produce images of WW2 German Soldiers and watch the hilarity ensue.

          One of the stronger points for Pro AI folks is the Democratization of Creativity. It'll allow Content Creators to generate stuff that they could not otherwise, like a Writer needing a book cover. One author I watch used a couple of AI images and Photoshop to make a book cover for one of his Novels.

          Funny enough, Point Number 5 is also an indication that the AI has been Braked. Now hopefully it gets some actually good Limits set on it before we get a Scamming Spree or generated Audio/Video footage of a Political Opponent using Gamer Words.
          Have You Touched Grass Today? If Not, Please Do.

          Comment


          • #6
            Originally posted by Chaotic Void View Post

            Yeah, the Scammer-Empowering aspect is another problem that needs to be addressed. I can't believe I forgot about it, considering that Knockoff Wonka Scam in Scotland is fresh in my memory.

            Unfortunately, the legislators and law enforcers tend to take their sweet time adapting to cyber-crimes like this.
            If you can access BBC radio there is a very good three-part series on Radio 4 entitled The Rise and Rise of the Microchip. It is presented by Misha Glenny, of whom you may have heard. I listened to the final part today and the age-old story persists: those who are pushing for A.I. and developing the technology assure us that everything is checked and all the technology is secure.
            "It ain't necessarily so
            The things that you're liable
            To read in the Bible
            It ain't necessarily so
            ."

            Sportin' Life
            Porgy & Bess, DuBose Heyward, George & Ira Gershwin

            Comment


            • #7
              Originally posted by Hypatia_Alexandria View Post

              If you can access BBC radio there is a very good three-part series on Radio 4 entitled The Rise and Rise of the Microchip. It is presented by Misha Glenny, of whom you may have heard. I listened to the final part today and the age-old story persists: those who are pushing for A.I. and developing the technology assure us that everything is checked and all the technology is secure.
              I am Canadian so we get British Content all the time (I remember the old BBC Chronicles of Narnia TV adaptations). I'll take a look and listen sometime.
              Have You Touched Grass Today? If Not, Please Do.

              Comment


              • #8
                Originally posted by Chaotic Void View Post

                I am Canadian so we get British Content all the time (I remember the old BBC Chronicles of Narnia TV adaptations). I'll take a look and listen sometime.
                Oh of course.

                I enjoy the radio and discovered the BBC when I lived in the UK many years ago. My affection for it has never left me and I also listen to the World Service.
                "It ain't necessarily so
                The things that you're liable
                To read in the Bible
                It ain't necessarily so
                ."

                Sportin' Life
                Porgy & Bess, DuBose Heyward, George & Ira Gershwin

                Comment


                • #9
                  Originally posted by Chaotic Void View Post
                  I wanted to find a better place to post this, but after much consideration I guess into the Civics Dumpster Fire it goes...

                  So, Artificial Intelligence has been making the waves lately in a lot of circles. Artists, Writers, Gamers, Voice Actors, Translators, and Tech Fanatics have all been talking about it. There's a lot of heated discussion surrounding it, but love it or hate it AI is here to stay.

                  And that concerns me.

                  To start, there's the, "DEY TERK ARE JERBS," aspect... which is a mixed bag. Sure, a lot of people are going to be out of a job and it's going to be harder to get into the Creatives as a profession, but even Creatives themselves admit that their line of work has been a haven for unemployable hacks for quite some time. I also don't know why some people are shocked that their line of work isn't immune to the Technological Innovation happening all around us.

                  There's also the degradation of quality and aesthetics portion of the problem. Sure, the AI stuff seems cool, but I've noticed that folks with a sense of aesthetics can tell something is, 'off' about it, whereas people with experience in the relevant field can immediately pick apart every technical flaw. As one blog post I read a while back said, "The fact that an AI Picture could win an art contest says more about the decline in the quality of artists than it does the AI Art itself."

                  Then there's my biggest beef with it: It's a dystopian Nightmare just waiting to happen. Especially the voice aspect.

                  Here in Canada, since Bill C-11 was passed, Net Neutrality has been basically dead and they've been passing even more laws to desecrate its corpse. The most recent Bill they're trying to pass is the Online Harms Act, which is your standard political Motte and Bailey... They address a lot of stuff that should have been addressed years ago, but they also lump in a bunch of stuff that's a bit more ambiguous or otherwise harmless. The long and short of it is that you can basically get Soap on a Rope for up to five years for posting hate speech and other shady things online (and I imagine being put on a Sex Offender Registry if your crimes were of that nature).

                  That alone is scary enough, especially because they're going to be applying it to stuff that happened prior to whenever the Bill passes and Hate Speech is one of those things that's poorly defined and enforced. However, when you factor in that Voice Cloning AI is a thing, your voice constantly gets recorded (Customer Service Calls), and databases get compromised all the time, it's a recipe for disaster. All some vengeful jackass has to do is get enough samples of your voice and they can just upload it online pretending to be you, and then go to the cops with evidence that you're spewing stuff online that's normally reserved for Call of Duty Voice Chat. Granted, this could backfire hilariously if it's discovered that the person reporting it was the one who made the hateful comment, but I'd sooner trust a punch bowl in Jonestown than rely on modern law enforcement to do its job properly.

                  I don't think this is some sort of grand conspiracy, but it is a problem that needs to be addressed before it spirals out of control.
                  I agree but it’s too late. I think AI is a runaway problem in the same way pollution is. We need petroleum products in our everyday lives but didn’t (couldn’t) plan for petroleum’s devastating effect. We need AI but didn’t (couldn’t) plan for its complications, like the deepfake one you mentioned. Just as worrying (but more from an economic perspective): every industry that banks on AI being a windfall will soon realize the workers they let go are the ones who feed the economy.

                  I use AI in my work for efficiency tasks that I used to do manually. I’m enjoying its utility but bracing myself for the inevitable trainwreck ahead as those problems unfold.

                  Comment


                  • #10
                    Originally posted by whag View Post

                    I agree but it’s too late. I think AI is a runaway problem in the same way pollution is. We need petroleum products in our everyday lives but didn’t (couldn’t) plan for petroleum’s devastating effect. We need AI but didn’t (couldn’t) plan for its complications, like the deepfake one you mentioned. Just as worrying (but more from an economic perspective): every industry that banks on AI being a windfall will soon realize the workers they let go are the ones who feed the economy.

                    I use AI in my work for efficiency tasks that I used to do manually. I’m enjoying its utility but bracing myself for the inevitable trainwreck ahead as those problems unfold.
                    I think at first you will see some idiots lay off their workers thinking they can get AI to do their work for them, but then realize they have no idea how to manage an AI, especially for creative purposes. Like programming. They probably think they can just tell an AI to "Write me an app to do X" but will soon realize that they need to do more than just give an AI a vague goal. Just as with human programmers, designing and programming software takes lots of planning, engineering, and testing. And some higher-level manager has no idea how to do any of that. But a programmer could probably use AI to augment his skills, having it write various specialized subroutines or interfaces, basically speeding up his work and eliminating the drudgery. I think that is how AI will eventually be used. As an augmentation of a person's abilities and skills, not as a replacement of that person and his skills.

                    Now for unskilled labor (fast food restaurants for example) I think that AI could take the place of a lot of the people there, if it is supplemented by automation. I could see an entirely automated McDonald's kiosk, with an AI "running" the show. Just need someone to keep the materials stocked and maintain the automation and AI.

                    Comment


                    • #11
                      Originally posted by Chaotic Void View Post
                      I wanted to find a better place to post this, but after much consideration I guess into the Civics Dumpster Fire it goes...

                      So, Artificial Intelligence has been making the waves lately in a lot of circles. Artists, Writers, Gamers, Voice Actors, Translators, and Tech Fanatics have all been talking about it. There's a lot of heated discussion surrounding it, but love it or hate it AI is here to stay.

                      And that concerns me.

                      To start, there's the, "DEY TERK ARE JERBS," aspect... which is a mixed bag. Sure, a lot of people are going to be out of a job and it's going to be harder to get into the Creatives as a profession, but even Creatives themselves admit that their line of work has been a haven for unemployable hacks for quite some time. I also don't know why some people are shocked that their line of work isn't immune to the Technological Innovation happening all around us.

                      There's also the degradation of quality and aesthetics portion of the problem. Sure, the AI stuff seems cool, but I've noticed that folks with a sense of aesthetics can tell something is, 'off' about it, whereas people with experience in the relevant field can immediately pick apart every technical flaw. As one blog post I read a while back said, "The fact that an AI Picture could win an art contest says more about the decline in the quality of artists than it does the AI Art itself."

                      Then there's my biggest beef with it: It's a dystopian Nightmare just waiting to happen. Especially the voice aspect.

                      Here in Canada, since Bill C-11 was passed, Net Neutrality has been basically dead and they've been passing even more laws to desecrate its corpse. The most recent Bill they're trying to pass is the Online Harms Act, which is your standard political Motte and Bailey... They address a lot of stuff that should have been addressed years ago, but they also lump in a bunch of stuff that's a bit more ambiguous or otherwise harmless. The long and short of it is that you can basically get Soap on a Rope for up to five years for posting hate speech and other shady things online (and I imagine being put on a Sex Offender Registry if your crimes were of that nature).

                      That alone is scary enough, especially because they're going to be applying it to stuff that happened prior to whenever the Bill passes and Hate Speech is one of those things that's poorly defined and enforced. However, when you factor in that Voice Cloning AI is a thing, your voice constantly gets recorded (Customer Service Calls), and databases get compromised all the time, it's a recipe for disaster. All some vengeful jackass has to do is get enough samples of your voice and they can just upload it online pretending to be you, and then go to the cops with evidence that you're spewing stuff online that's normally reserved for Call of Duty Voice Chat. Granted, this could backfire hilariously if it's discovered that the person reporting it was the one who made the hateful comment, but I'd sooner trust a punch bowl in Jonestown than rely on modern law enforcement to do its job properly.

                      I don't think this is some sort of grand conspiracy, but it is a problem that needs to be addressed before it spirals out of control.
                      AI is cool as a beefed-up search engine, but once they start deploying it in autonomous weapon systems and drones, that's when it could become a nightmare. The deepfake stuff is also pretty frightening. So far, a person who's seen deepfakes can tell the difference, but at some point, reality and deepfakes will be indistinguishable.

                      Comment


                      • #12
                        Given the increasing controls placed on society, I think the following is the most likely scenario we will end up seeing.

                        Comment


                        • #13
                          Originally posted by whag View Post

                          I agree but it’s too late. I think AI is a runaway problem in the same way pollution is. We need petroleum products in our everyday lives but didn’t (couldn’t) plan for petroleum’s devastating effect. We need AI but didn’t (couldn’t) plan for its complications, like the deepfake one you mentioned. Just as worrying (but more from an economic perspective): every industry that banks on AI being a windfall will soon realize the workers they let go are the ones who feed the economy.

                          I use AI in my work for efficiency tasks that I used to do manually. I’m enjoying its utility but bracing myself for the inevitable trainwreck ahead as those problems unfold.
                          Yeah, there's definitely some nasty chickens coming home to roost... but I think there's hope that course correction will still happen. As an example... Self-Checkouts. Sure, some stores benefit from the fact they have less staff to pay, but if stores start losing too much money from maintenance costs and shoplifters, then they'll be reduced or phased out entirely. I know one store in town got rid of all of their Self-Checkouts because there was so much theft after they were installed.
                          Have You Touched Grass Today? If Not, Please Do.

                          Comment


                          • #14
                            Originally posted by seanD View Post

                            AI is cool as a beefed-up search engine, but once they start deploying it in autonomous weapon systems and drones, that's when it could become a nightmare. The deepfake stuff is also pretty frightening. So far, a person who's seen deepfakes can tell the difference, but at some point, reality and deepfakes will be indistinguishable.
                            Perhaps to an untrained eye, but someone who has the relevant skills and experience could easily dismantle it. I remember watching a video of a professional artist criticizing Shadiversity, a YouTuber and a known AI Art Apologist, where he picked apart every problem he could find in Shad's magnum opus of AI Art. To me, the image looked pretty good (though it still felt, 'off'), but the artist was quick to point out all the technical flaws. I think long-term we're going to have to either teach people how to spot these sorts of flaws, or rely on people who know what they're doing.
                            Have You Touched Grass Today? If Not, Please Do.

                            Comment


                            • #15
                              Originally posted by Chaotic Void View Post

                              Yeah, the Scammer-Empowering aspect is another problem that needs to be addressed. I can't believe I forgot about it, considering that Knockoff Wonka Scam in Scotland is fresh in my memory.

                              Unfortunately, the legislators and law enforcers tend to take their sweet time adapting to cyber-crimes like this.
                              It seems to be a general trend these days that the government comes months or years late to the problem. In this case, government regulation, at least in the US, will be some form of letting the industry regulate itself.
                              "For I desire mercy, not sacrifice, and acknowledgment of God rather than burnt offerings." Hosea 6:6

                              "Theology can be an intellectual entertainment." Metropolitan Anthony Bloom

                              Comment
