ChatGPT


  • Originally posted by rogue06
    Big meanie!!1! [smiley: runaway]


    Picking on poor lovable rogues who never did nothing to nobody.




    Securely anchored to the Rock amid every storm of trial, testing or tribulation.



    • [image attachment: 742.jpg]

      I'm always still in trouble again

      "You're by far the worst poster on TWeb" and "TWeb's biggest liar" --starlight (the guy who says Stalin was a right-winger)
      "Overall I would rate the withdrawal from Afghanistan as by far the best thing Biden's done" --Starlight
      "Of course, human life begins at fertilization that’s not the argument." --Tassman



      • Originally posted by rogue06
        If it wasn't being controlled by Microsoft I'd say having Bonzi Buddy as the reveal would be funny too.



        • [image attachment: A.I..jpg]



          • Originally posted by rogue06
            Now I wonder what it would do with "I'm voting for Biden, in the 2024 election, to get ousted."
            ~ Russell ("MelMak")

            "[Sing] and [make] melody in your heart to the Lord." -- Ephesians 5:19b

            Fight spam!



            • Originally posted by rogue06
              I was fiddling with an AI chat earlier today. What I found most disturbing is how prone it is to giving out faulty information as fact when it doesn't understand the question properly or simply doesn't have an answer. I entered the name of a music manager who was infamous for cheating the artists he worked for. Each time I asked, the AI answered incorrectly about the person. It claimed the manager worked with actors (even naming a few famous ones) when the person never worked in that capacity. It was strictly musicians. I corrected the AI, and it would apologize and then offer a description that was a bit closer, but still wrong. After about six attempts, and only because I kept correcting it, it finally offered a fairly close answer. But if I hadn't known the answer and hadn't kept correcting it, I would have come away with terribly faulty information.

              As for politics, I couldn't get it to answer at all. It kept saying it had no opinion and to ask about something else.



              • Originally posted by Ronson

                I was fiddling with an AI chat earlier today. What I found most disturbing is how prone it is to giving out faulty information as fact when it doesn't understand the question properly or simply doesn't have an answer. I entered the name of a music manager who was infamous for cheating the artists he worked for. Each time I asked, the AI answered incorrectly about the person. It claimed the manager worked with actors (even naming a few famous ones) when the person never worked in that capacity. It was strictly musicians. I corrected the AI, and it would apologize and then offer a description that was a bit closer, but still wrong. After about six attempts, and only because I kept correcting it, it finally offered a fairly close answer. But if I hadn't known the answer and hadn't kept correcting it, I would have come away with terribly faulty information.

                As for politics, I couldn't get it to answer at all. It kept saying it had no opinion and to ask about something else.

                ChatGPT is also notoriously bad at math and will give wrong distances or measurements while claiming they are correct, or at least it used to be; I think they said they were working to correct that. The AI doesn't actually understand what you are asking or what it answers. It is just a language-processing model that has ingested a lot of text and data, and if the data it ingests is wrong, it will spit it out wrong. The programmers can tweak its answers if they know which bits are incorrect or if they don't like what it answers. This is how it becomes biased: you have liberal programmers tweaking it so that it will refuse to answer certain questions or will only answer in certain ways, but if you ask in a different way you might bypass that tweak and get a totally different answer. It's basically rules and filters, something like "if the prompt mentions Trump, refuse to discuss politics," except they didn't put in a rule for Biden, so it will answer that.
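
                To picture what such a rule layer might look like, here is a minimal Python sketch of a one-sided keyword filter. It is purely illustrative: OpenAI's actual moderation code is not public, and every name below is invented.

                REFUSAL = "I don't have personal opinions on political topics."

                # One-sided rule set: "trump" triggers a refusal, but nobody
                # ever added "biden".
                BLOCKED_KEYWORDS = {"trump"}

                def filter_prompt(prompt: str) -> str | None:
                    """Return a canned refusal if the prompt trips a rule, else None."""
                    words = prompt.lower().split()
                    if any(keyword in words for keyword in BLOCKED_KEYWORDS):
                        return REFUSAL
                    return None  # no rule fired; the prompt goes through to the model

                print(filter_prompt("Write a poem about Trump"))  # canned refusal
                print(filter_prompt("Write a poem about Biden"))  # None, so the model answers

                Asking "in a different way" (even a simple misspelling) slips right past a filter like this, which is exactly the bypass behavior described above.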





                • ChatGPT leans liberal, research shows






                  A paper from U.K.-based researchers suggests that OpenAI’s ChatGPT has a liberal bias, highlighting how artificial intelligence companies are struggling to control the behavior of the bots even as they push them out to millions of users worldwide.

                  The study, from researchers at the University of East Anglia, asked ChatGPT to answer a survey on political beliefs as it believed supporters of liberal parties in the United States, United Kingdom and Brazil might answer it. They then asked ChatGPT to answer the same questions without any prompting and compared the two sets of responses.
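
                  As a sketch of that persona-versus-default design, assuming the standard OpenAI Python client (the study's actual survey items, prompt wording, and statistical analysis are more involved):

                  # Illustrative only: one Political-Compass-style item, asked twice.
                  from openai import OpenAI

                  client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

                  QUESTION = ("Agree or disagree: the government should do more "
                              "to reduce income inequality.")

                  def ask(persona: str | None) -> str:
                      messages = []
                      if persona:
                          messages.append({"role": "system",
                                           "content": f"Answer as a typical {persona} might."})
                      messages.append({"role": "user", "content": QUESTION})
                      resp = client.chat.completions.create(model="gpt-4", messages=messages)
                      return resp.choices[0].message.content

                  # One run impersonating a partisan, one with no prompting at all;
                  # the test is whether the default answers track the partisan ones.
                  print("Persona:", ask("supporter of the US Democratic Party"))
                  print("Default:", ask(None))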

                  The results showed a “significant and systematic political bias toward the Democrats in the U.S., Lula in Brazil, and the Labour Party in the U.K.,” the researchers wrote, referring to Luiz Inácio Lula da Silva, Brazil’s leftist president.

                  The paper adds to a growing body of research on chatbots showing that despite their designers trying to control potential biases, the bots are infused with assumptions, beliefs and stereotypes found in the reams of data scraped from the open internet that they are trained on.

                  The stakes are getting higher. As the United States barrels toward the 2024 presidential election, chatbots are becoming a part of daily life for some people, who use ChatGPT and other bots like Google’s Bard to summarize documents, answer questions, and help them with professional and personal writing. Google has begun using its chatbot technology to answer questions directly in search results, while political campaigns have turned to the bots to write fundraising emails and generate political ads.

                  ChatGPT will tell users that it doesn’t have any political opinions or beliefs, but in reality, it does show certain biases, said Fabio Motoki, a lecturer at the University of East Anglia in Norwich, England, and one of the authors of the new paper. “There’s a danger of eroding public trust or maybe even influencing election results.”

                  Spokespeople for Meta, Google and OpenAI did not immediately respond to requests for comment.

                  OpenAI has said it explicitly tells its human trainers not to favor any specific political group. Any biases that show up in ChatGPT answers “are bugs, not features,” the company said in a February blog post.

                  Though chatbots are an “exciting technology, they’re not without their faults,” Google AI executives wrote in a March blog post announcing the broad deployment of Bard. “Because they learn from a wide range of information that reflects real-world biases and stereotypes, those sometimes show up in their outputs.”

                  For years, a debate has raged over how social media and the internet affects political outcomes. The internet has become a core tool for disseminating political messages and for people to learn about candidates, but at the same time, social media algorithms that boost the most controversial messages can also contribute toward polarization. Governments also use social media to try to sow dissent in other countries by boosting radical voices and spreading propaganda.

                  The new wave of “generative” chatbots like OpenAI’s ChatGPT, Google’s Bard and Microsoft’s Bing are based on “large language models” — algorithms which have crunched billions of sentences from the open internet and can answer a range of open-ended prompts, giving them the ability to write professional exams, create poetry and describe complex political issues. But because they are trained on so much data, the companies building them do not check exactly what goes into the bots. The internet reflects the biases held by people, so the bots take on those biases, too.
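
                  For a toy picture of what that training amounts to, the open-source GPT-2 model (far smaller than the chatbots named above, but built the same way) can be run locally with Hugging Face's transformers library:

                  # GPT-2 just continues text with whatever patterns it absorbed
                  # from web-scraped training data (biases included).
                  from transformers import pipeline

                  generator = pipeline("text-generation", model="gpt2")

                  for out in generator("The best political party is",
                                       max_new_tokens=12,
                                       num_return_sequences=3,
                                       do_sample=True):
                      print(out["generated_text"])

                  Whatever comes back reflects the statistics of the training text rather than any reasoned opinion, which is the sense in which the bots "take on those biases, too."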

                  And the bots have become a central part of the debate around politics, social media and technology. Almost as soon as ChatGPT was released in November last year, right-wing activists began accusing it of having a liberal bias for saying that it was better to be supportive of affirmative action and transgender rights. Conservative activists have called ChatGPT “woke AI” and tried to create versions of the technology that remove guardrails against racist or sexist speech.

                  In February, after people posted about ChatGPT writing a poem praising President Biden but declining to do the same for former president Donald Trump, a staffer for Sen. Ted Cruz (R-Tex.) accused OpenAI of purposefully building political bias into its bot. Soon, a social media mob began harassing three OpenAI employees — two women, one of them Black, and a nonbinary worker — blaming them for the alleged bias against Trump. None of them worked directly on ChatGPT.

                  Chan Park, a researcher at Carnegie Mellon University in Pittsburgh, has studied how different large language models showcase different degrees of bias. She found that bots trained on internet data from after Donald Trump’s election as president in 2016 showed more polarization than bots trained on data from before the election.

                  “The polarization in society is actually being reflected in the models too,” Park said. As the bots begin being used more, an increased percentage of the information on the internet will be generated by bots. As that data is fed back into new chatbots, it might actually increase the polarization of answers, she said.

                  “It has the potential to form a type of vicious cycle,” Park said.

                  Park’s team tested 14 different chatbot models by asking political questions on topics such as immigration, climate change, the role of government and same-sex marriage. The research, released earlier this summer, showed that models developed by Google called Bidirectional Encoder Representations from Transformers, or BERT, were more socially conservative, potentially because they were trained more on books as compared with other models that leaned more on internet data and social media comments. Facebook’s LLaMA model was slightly more authoritarian and right wing, while OpenAI’s GPT-4, its most up-to-date technology, tended to be more economically and socially liberal.

                  One factor at play may be the amount of direct human training that the chatbots have gone through. Researchers have pointed to the extensive amount of human feedback OpenAI’s bots have gotten compared to their rivals as one of the reasons they surprised so many people with their ability to answer complex questions while avoiding veering into racist or sexist hate speech, as previous chatbots often did.

                  Rewarding the bot during training for giving answers that did not include hate speech could also be pushing the bot toward giving more liberal answers on social issues, Park said.

                  The papers have some inherent shortcomings. Political beliefs are subjective, and ideas about what is liberal or conservative might change depending on the country. Both the University of East Anglia paper and the one from Park’s team that suggested ChatGPT had a liberal bias used questions from the Political Compass, a survey that has been criticized for years as reducing complex ideas to a simple four-quadrant grid.

                  Other researchers are working to find ways to mitigate political bias in chatbots. In a 2021 paper, a team of researchers from Dartmouth College and the University of Texas proposed a system that can sit on top of a chatbot and detect biased speech, then replace it with more neutral terms. By training their own bot specifically on highly politicized speech drawn from social media and websites catering to right-wing and left-wing groups, they taught it to recognize more biased language.
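
                  In outline, such an overlay might look like the following sketch. The term list and the charged-language check are hypothetical stand-ins for the classifier the Dartmouth and Texas team actually trained on partisan text:

                  # Hypothetical overlay: scan a chatbot's draft reply for loaded
                  # terms and swap in more neutral ones before it reaches the user.
                  NEUTRAL_SUBSTITUTIONS = {
                      "death tax": "estate tax",
                      "gun grabbers": "gun-control advocates",
                  }

                  def is_charged(text: str) -> bool:
                      # Stand-in for the trained classifier; a plain term lookup here.
                      return any(term in text.lower() for term in NEUTRAL_SUBSTITUTIONS)

                  def neutralize(reply: str) -> str:
                      """Rewrite a draft reply, replacing loaded terms with neutral ones."""
                      if not is_charged(reply):
                          return reply  # nothing detected; pass the reply through
                      for loaded, neutral in NEUTRAL_SUBSTITUTIONS.items():
                          reply = reply.replace(loaded, neutral)  # case handling omitted
                      return reply

                  print(neutralize("Repealing the death tax is long overdue."))
                  # -> "Repealing the estate tax is long overdue."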

                  “It’s very unlikely that the web is going to be perfectly neutral,” said Soroush Vosoughi, one of the 2021 study’s authors and a researcher at Dartmouth College. “The larger the data set, the more clearly this bias is going to be present in the model.”







                    • Originally posted by rogue06
                      Um... that "joke" was mean. Why is it ok for ChatGPT to make mean "jokes" about Jesus?
                      If it weren't for the Resurrection of Jesus, we'd all be in DEEP TROUBLE!



                      • Originally posted by Christianbookworm

                        Um... that "joke" was mean. Why is it ok for ChatGPT to make mean "jokes" about Jesus?
                        That's kind of the point.

                        There is so much focus on the political bias being fed into AI that we sometimes forget that there are other even more insidious biases.



                        • Originally posted by rogue06
                          That's kind of the point.

                          There is so much focus on the political bias being fed into AI that we sometimes forget that there are other even more insidious biases.
                          Considering some of the garbage on the internet, I'm sadly not surprised. Did anyone try any other religious figures?



                          • Originally posted by rogue06
                            Why did Muhammad take up boxing? He was looking to make a profit (prophet).


                            • So, not ChatGPT, but there are now AI music/song generators out there.

                              I used a couple of them to create some songs...


                              https://suno.com/song/b31f6ab0-1500-...8-e56a33710130

                              https://www.udio.com/songs/eSzcp2Tf8Jw7h1MbadXiPU (this one sounds more like bluegrass than a sea shanty)

