The uncanny resemblance between dreams and generative AI

The graduation ceremony at an Ivy League university was about to begin. The hall was packed with students and their relatives. A girl in her late 20s stepped onto the dais. She was dressed in a black graduation gown and cap. Pointing toward me, she called out loudly, “Lavazza.” I was taken by surprise, not only because she singled me out in the vast gathering but also because I had no recollection of how I ended up in that hall.

The ceremony soon commenced. Instead of focusing on the students coming to collect their certificates on stage, my attention was inexplicably drawn to a man standing in the right corner of the stage. He seemed too old to be a student, possibly in his late 30s. He was tall, well-built with broad shoulders, and had a bony facial structure with wide jaws. Behind his right shoulder stood a child who appeared to be around 10 years old. The child gradually began climbing the man’s back, scratching as he ascended. Eventually, the child reached the man’s shoulders. A dark green cloth covered the child’s body. He continued climbing higher until I realized it was all a dream.

I went back to sleep and forgot about the dream. The next morning, I went about my usual routine—brushing my teeth and going for a jog. While scrolling through my phone, I read about OpenAI’s video-generating application called Sora. This application can create videos from textual descriptions. For example, it can generate a video of “…a 3-year-old kid playing with a dog and three puppies in the backyard garden on a sunny day.” Suddenly, my strange dream from the previous night came to mind. As I recalled the entire dream, I noticed an uncanny resemblance between how dreams are generated and how generative AI like Sora works.

Generative AI is powered by large language models (LLMs). These models are trained on extensive datasets, enabling them to predict the next word, sound bite, or video frame. A token is a small unit of data that AI models can process, and with sophisticated algorithms, LLMs draw on their vast repository of information to predict the next token in a sequence. Dreaming seems quite similar: based on our memories, the dream generates the next visual frame on the go, drawing on the large repository of memories in our minds. My dream, for instance, might have drawn from the many videos of graduation ceremonies I had seen on social media. While there was no explicit text prompt setting my dream's scene, the visuals felt generated in real time as I watched. It wasn't like watching an event that had already happened or was being telecast live; it felt as though my mind was creating it even as I watched. Yet I was unaware of this while dreaming. The dream felt vivid, though not as real as waking life, but my reactions felt genuine. The surprise when the girl on stage called out "Lavazza" and my confusion at seeing the child climb the tall man's back felt authentic.
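The idea of "predicting the next token" can be illustrated with a deliberately tiny sketch. This is not how Sora or a real LLM works internally (those use neural networks trained on enormous datasets); it is a toy bigram model, and the corpus and function names below are invented for illustration. It simply counts which token follows which in some training text and then generates a sequence one token at a time, loosely mirroring the frame-by-frame generation described above.

```python
from collections import Counter, defaultdict

# Toy training text (a stand-in for a real training corpus).
corpus = (
    "the ceremony soon commenced and the students came to the stage "
    "to collect their certificates on the stage"
).split()

# Count which token follows which (a bigram table).
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def predict_next(token):
    """Return the most frequent token seen after `token`, or None."""
    followers = transitions.get(token)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

def generate(start, length=5):
    """Greedily extend a sequence one predicted token at a time."""
    out = [start]
    for _ in range(length):
        nxt = predict_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the"))
```

Even this crude model produces locally plausible continuations while having no understanding of what it is saying, which is one way to think about why fluent output and factual reliability can come apart.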

Dreaming and generative AI both generate frames in a way that maintains broad consistency and logical coherence, but both also tend to produce hallucinations: factually incorrect or logically inconsistent output. For example, finding myself at a graduation ceremony watching a child climb the back of an unknown man on stage makes no sense and is unlikely to happen in real life. Similarly, generative AI throws up factually incorrect information. In recent times, some lawyers have been reprimanded by courts for submitting bogus case law, generated by ChatGPT, in their legal briefs.

In the 1970s, Harvard psychiatrists Allan Hobson and Robert McCarley proposed the activation-synthesis hypothesis of dreams. According to this theory, dreams result from the brain's attempt to make sense of random neural activity during REM sleep. The brain synthesizes this activity into a coherent narrative, even though the underlying neural signals are essentially random and meaningless. Similarly, generative AI predicts the next token in a sequence to maintain a coherent narrative, and sometimes it produces pure gibberish, much like our nonsensical dreams.

Since dreams emerge from the subconscious mind, generative AI can be compared to it in certain ways. This raises important questions: Could AI become conscious in the future? Is the subconscious mind a stepping stone in the evolution of a conscious mind? After all, the animals from which humans evolved are not conscious in the same way we are; they operate more subconsciously. Alternatively, consciousness might not develop in a bottom-up manner, suggesting AI might never achieve consciousness through this route.

AI offers a fascinating opportunity to understand our minds better. Technology has always provided analogies to help us comprehend how our minds work. We used to compare the mind to machines that output responses to inputs. Then we used computers to describe information processing in our minds. Now AI serves as a more sophisticated analogy for understanding our minds. Whether we will achieve Artificial General Intelligence (AGI) remains to be seen, but as AI technology advances, our understanding of the mind will undoubtedly improve.

Disclaimer

Views expressed above are the author’s own.
