Announcement


Natural Science 301 Guidelines

This is an open forum area for all members for discussions on all issues of science and origins. This area will and does get volatile at times, but we ask that it be kept to a dull roar, and moderators will intervene to keep the peace if necessary. This means obvious trolling and flaming that becomes a problem will be dealt with, and you might find yourself in the doghouse.

As usual, Tweb rules apply. If you haven't read them, now would be a good time.

Forum Rules: Here

Mind reading AI, oh my!


  • #16
    Originally posted by Sparko View Post
    Originally posted by The Lurch
    Yeah, definitely possible - I acknowledged things could improve in my first response in this thread. It's just a hard problem. It's easy to get detailed, real time, higher resolution data, but it requires brain surgery to stick electrodes into someone's head. And we can get real time but very low resolution data by sticking a bunch of electrodes on someone's scalp. So, there's definitely a lot of work being put into several parallel approaches, but I've not seen any indication of rapid advances in any of them.

    And it would be really nice to see progress, since there are a lot of medical conditions where better tech of this sort could be life-changing for a lot of people.
    Maybe AI can enhance the resolution of the external electrode setup, similar to the way AI can enhance the resolution of pictures using predictive techniques. And really they only need to be able to read two areas: the speech center and the motor cortex. The latter for helping disabled people move artificial limbs and such.
    Scientists read brain waves to recreate Pink Floyd's Another Brick in the Wall. This uses just external electrodes and software and not fMRI or implanted electrodes. So basically this means such devices can be made smaller as the technology progresses. Still really rough and low res but it shows the technology is moving forward pretty quickly.
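    Just to make the "software pulling more detail out of low-res electrode data" idea concrete, here is a minimal sketch of the kind of model that could be involved: a regularized linear map from scalp-electrode features to a richer target signal (an audio spectrogram, in the spirit of the Pink Floyd result). The array shapes, feature names, and random stand-in data are my own assumptions for illustration, not the actual pipeline from that study.

    Code:
# Illustrative sketch only (not the actual pipeline from the study linked above).
# Idea: learn a mapping from low-resolution scalp-electrode features to a richer
# target signal, the same "predictive enhancement" trick described in the post.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Hypothetical shapes (random stand-in data):
#   eeg_features: (n_time_windows, n_electrodes * n_frequency_bands)
#   spectrogram:  (n_time_windows, n_mel_bins), the richer signal to recover
rng = np.random.default_rng(0)
eeg_features = rng.normal(size=(5000, 64 * 5))   # 64 electrodes, 5 frequency bands
spectrogram = rng.normal(size=(5000, 128))       # 128 mel bins

X_train, X_test, y_train, y_test = train_test_split(
    eeg_features, spectrogram, test_size=0.2, random_state=0)

# One regularized linear map per spectrogram bin; real studies use fancier
# models, but the principle of inferring detail the raw sensors cannot
# resolve on their own is the same.
model = Ridge(alpha=1.0).fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))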




    Comment


    • #17
      Originally posted by TheLurch View Post
      Well, fMRI has been around for over 30 years. It's gotten better in terms of resolution since that time in the same way that transistors improved since their invention. But no alternative technology's come along that can give us the same information without a room-sized supercooled magnet yet.
      I do not consider this AI technology truly mind reading. It is the same thing that humans do all the time: inferring further information from the limited information provided by the person. People do not realize how transparent they are in everyday interactions; AI is just becoming more sophisticated at doing this.
      Glendower: I can call spirits from the vasty deep.
      Hotspur: Why, so can I, or so can any man;
      But will they come when you do call for them? Shakespeare’s Henry IV, Part 1, Act III:

      go with the flow the river knows . . .

      Frank

      I do not know, therefore everything is in pencil.

      Comment


      • #18
        There is a significant advance in technology whereby the images people see can be traced, monitored, and reconstructed from brain scans. What this and other similar advances in AI and related technology achieve is linking consciousness and intelligence to the direct function of the brain.


        Source: https://www.science.org/content/article/ai-re-creates-what-people-see-reading-their-brain-scans



        AI re-creates what people see by reading their brain scans

        A new artificial intelligence system can reconstruct images a person saw based on their brain activity

        7 MAR 2023 BY KAMAL NAHAS
        As neuroscientists struggle to demystify how the human brain converts what our eyes see into mental images, artificial intelligence (AI) has been getting better at mimicking that feat. A recent study, scheduled to be presented at an upcoming computer vision conference, demonstrates that AI can read brain scans and re-create largely realistic versions of images a person has seen. As this technology develops, researchers say, it could have numerous applications, from exploring how various animal species perceive the world to perhaps one day recording human dreams and aiding communication in people with paralysis.

        Many labs have used AI to read brain scans and re-create images a subject has recently seen, such as human faces and photos of landscapes. The new study marks the first time an AI algorithm called Stable Diffusion, developed by a German group and publicly released in 2022, has been used to do this. Stable Diffusion is similar to other text-to-image “generative” AIs such as DALL-E 2 and Midjourney, which produce new images from text prompts after being trained on billions of images associated with text descriptions.

        For the new study, a group in Japan added additional training to the standard Stable Diffusion system, linking additional text descriptions about thousands of photos to brain patterns elicited when those photos were observed by participants in brain scan studies.

        © Copyright Original Source
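
        A hedged sketch of what "linking text descriptions about photos to brain patterns" could look like in code, assuming a simple regression from voxel patterns into the embedding space that a generative image model is conditioned on. The shapes, the ridge-regression choice, and the variable names are illustrative assumptions, not the published method.

        Code:
# Hypothetical sketch of the general approach quoted above: learn a map from
# fMRI voxel patterns into the embedding space a generative image model is
# conditioned on.  Shapes, ridge regression, and names are my assumptions,
# not the published method.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_images, n_voxels, embed_dim = 1000, 4000, 768

voxel_patterns = rng.normal(size=(n_images, n_voxels))        # one scan per viewed photo
caption_embeddings = rng.normal(size=(n_images, embed_dim))   # embedding of each photo's text description

# Fit brain -> embedding on the training photos
brain_to_embedding = Ridge(alpha=100.0).fit(voxel_patterns, caption_embeddings)

# At test time, a new scan is mapped into embedding space; that predicted
# embedding would then condition the generative model to re-create the image.
new_scan = rng.normal(size=(1, n_voxels))
predicted_embedding = brain_to_embedding.predict(new_scan)
print(predicted_embedding.shape)  # (1, 768)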


        Glendower: I can call spirits from the vasty deep.
        Hotspur: Why, so can I, or so can any man;
        But will they come when you do call for them? Shakespeare’s Henry IV, Part 1, Act III:

        go with the flow the river knows . . .

        Frank

        I do not know, therefore everything is in pencil.

        Comment


        • #19
          Originally posted by shunyadragon View Post
          There is a significant advance in technology whereby the images people see can be traced, monitored, and reconstructed from brain scans. What this and other similar advances in AI and related technology achieve is linking consciousness and intelligence to the direct function of the brain.



          cool.


          Comment


          • #20
            One of the complicated problems in relating the physical brain to consciousness and behavior is actually being able to map that relationship. Science has found a way to map it, in a manner applicable to all animals including humans, by studying the relationship in a worm at the simplest level.

            Source: https://scitechdaily.com/neural-navigators-how-mit-cracked-the-code-that-relates-brain-and-behavior-in-a-simple-animal/



            Neural Navigators: How MIT Cracked the Code That Relates Brain and Behavior in a Simple Animal

            MIT researchers model and map how neurons across the tiny brain of a C. elegans worm encode its behaviors, revealing many new insights about the robustness and flexibility of its nervous system

            To understand the intricate relationship between brain activity and behavior, scientists have needed a way to map this relationship for all of the neurons across a whole brain. Thus far this has been an insurmountable challenge. But after inventing new technologies and methods for the purpose, a team of scientists in The Picower Institute for Learning and Memory at MIT has produced a meticulous accounting of the neurons in the tractably tiny brain of a humble C. elegans worm, mapping out how its brain cells encode almost all of its essential behaviors, such as movement and feeding.


            In the journal Cell on August 21, the team presented new brain-wide recordings and a mathematical model that accurately predicts the versatile ways that neurons represent the worm’s behaviors. Applying that model specifically to each cell, the team produced an atlas of how most cells, and the circuits they take part in, encode the animal’s actions. The atlas, therefore, reveals the underlying “logic” of how the worm’s brain produces a sophisticated and flexible repertoire of behaviors, even as its environmental circumstances change.
            Insights from the Research


            “This study provides a global map of how the animal’s nervous system is organized to control behavior,” said senior author Steven Flavell, Associate Professor in MIT’s Department of Brain and Cognitive Sciences. “It shows how the many defined nodes that make up the animal’s nervous system encode precise behavioral features, and how this depends on factors like the animal’s recent experience and current state.”

            Graduate students Jungsoo Kim and Adam Atanas, who each earned their PhDs this spring for the research, are the study’s co-lead authors.

            © Copyright Original Source
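
            For a rough sense of what a "mathematical model that predicts how neurons represent the worm's behaviors" might involve, here is a toy encoding model: predicting one neuron's activity trace from behavioral variables. The behavioral regressors, the linear form, and the synthetic data are assumptions for illustration; the actual Cell paper fits a far richer brain-wide model.

            Code:
# Toy "encoding model" in the spirit of the study quoted above: predict one
# neuron's activity trace from behavioral variables.  The regressors, the
# linear form, and the synthetic data are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n_timepoints = 3000

# Hypothetical behavioral regressors measured alongside neural imaging
velocity = rng.normal(size=n_timepoints)
head_curvature = rng.normal(size=n_timepoints)
feeding_rate = rng.normal(size=n_timepoints)
behavior = np.column_stack([velocity, head_curvature, feeding_rate])

# Synthetic neuron that responds to velocity and feeding, plus noise
neuron_activity = 0.8 * velocity + 0.3 * feeding_rate + 0.1 * rng.normal(size=n_timepoints)

encoding_model = LinearRegression().fit(behavior, neuron_activity)
print("weight per behavior:", encoding_model.coef_)
print("variance explained:", encoding_model.score(behavior, neuron_activity))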

            Last edited by shunyadragon; 08-23-2023, 10:17 AM.
            Glendower: I can call spirits from the vasty deep.
            Hotspur: Why, so can I, or so can any man;
            But will they come when you do call for them? Shakespeare’s Henry IV, Part 1, Act III:

            go with the flow the river knows . . .

            Frank

            I do not know, therefore everything is in pencil.

            Comment


            • #21
              Originally posted by shunyadragon View Post
              One of the complicated problems in relating the physical brain to consciousness and behavior is actually being able to map that relationship. Science has found a way to map it, in a manner applicable to all animals including humans, by studying the relationship in a worm at the simplest level.


              Well it only has 302 neurons, so..

              But still interesting.

              Comment


              • #22
                Originally posted by Sparko View Post

                Well it only has 302 neurons, so..

                But still interesting.
                If you read the article carefully, it is not about the size of the animal or the number of neurons involved. The issue is the process by which MIT cracked the code that relates brain and behavior. They were able to do this in a small, simple animal, but the process is the same in all animals, including humans.
                Glendower: I can call spirits from the vasty deep.
                Hotspur: Why, so can I, or so can any man;
                But will they come when you do call for them? Shakespeare’s Henry IV, Part 1, Act III:

                go with the flow the river knows . . .

                Frank

                I do not know, therefore everything is in pencil.

                Comment
