
HELLO FUTURE: Can Artificial Intelligence Dream & Hallucinate?

The Algorithm and the Soul: Can AI Dream? Can a machine imagine, or even dream? In this episode of HELLO FUTURE with Kevin Cirilli, Dr. Maya Ackerman, author of Creative Machines: AI, Art & Us, joins Kevin to explore what happens when computers start to hallucinate. From AI image generators that invent worlds to music programs that hear melodies no one's ever written, Ackerman explains how these systems mimic the brain's dream state, and what that says about us. Together, they ask: when an algorithm creates something that feels emotional, is it a reflection of us, or something new being born? Ackerman argues that creative AI is more than code; it's a mirror showing humanity what it means to dream.


Speaker 1 (00:08):
All right. So the other day, I was on a walk with one of my colleagues on the National Mall in Washington, DC. If you've ever been to Washington, that's in between the Lincoln Memorial and the Washington Monument, as well as the Capitol. So we're walking along and I see all of these police officers, Capitol Police, who are great people, on horses. That night I had a dream about horses. And the next day I woke up and I was driving to go speak at a conference at Penn State University, great school and my alma mater, and I saw horses in the middle of the Pennsylvania countryside. And then I went down a YouTube rabbit hole, because I googled, what does it mean if I saw horses and then I have horses in my dreams? Blah blah blah. I don't really believe the hokey-pokey stuff, but it led me to this idea that artificial intelligence can dream, and that blew my mind, which is why I'm so excited to have my guest back on the show, Doctor Maya Ackerman. She studies all of this stuff, and her great new book is called Creative Machines: AI, Art & Us. She's also been an advisor and keynote speaker for Google, IBM, Oxford, Microsoft, and the United Nations, so she really knows her stuff and everything that she's talking about. She is the co-founder of Wave AI, which launched LyricStudio, something she talked about in a previous episode about the future of music. So I told her I wanted to have her back on and talk about AI hallucinations and AI dreams and all of this stuff, and she agreed. So, Maya, thanks for coming on. First of all, let's define some of these terms. What is creativity? What is an AI hallucination? And can AI dream? Let's just keep it all very broad.

Speaker 2 (01:53):
Thank you so much for having me. That's a great question to start from. Some people are surprised that we have a fairly crisp definition of creativity. So: something is creative if it's novel and valuable. Novel makes sense; if it's just a copy of something, then it's generally not considered creative. And appropriateness, or value, is about how fit it is for its particular domain. It could be aesthetic value if it's music or art, and if it's a recipe, it needs to taste good, et cetera. But that gives us a good guide, because if we say that something is creative if it's novel and valuable, then we've separated it from the process. And so things that machines make are also allowed to be creative under this definition. That's really the heart of it. Now, you're already touching on dreams and hallucinations.

Speaker 1 (02:42):
I know, I just get right into the weird stuff. I can't help it. I'm just like, this is what got me into this whole futurism stuff, all of these topics. Who defined creativity? Like, what definition are you going off of?

Speaker 2 (02:54):
So this is a definition that we've been refining in
academia for several decades.

Speaker 1 (02:58):
Wow. And so the point that you're making, which is very important to underscore for the premise of this conversation, is that machines can be creative if we are.

Speaker 2 (03:09):
Willing to judge them based on output, then yes, for sure.

Speaker 1 (03:14):
But my question is, is it the machine's output that's creative, or is it our human reaction, that we're applying our human creativity to the output of machine-generated AI? Does that make sense?

Speaker 2 (03:29):
Let me tell you a quick story. This is back from the nineteen eighties. A man by the name of David Cope, who actually, unfortunately, recently passed away, made one of the pioneering systems in creative AI. It was called EMI, Experiments in Musical Intelligence, and fellow musicians, professional musicians, would get so impressed with the music. At one point he said that he had found a new, previously undiscovered Bach piece, and the music world lost its mind. Wow, it's so beautiful, this is amazing. And then they found out it was made by an AI, and suddenly they were like, oh, we could tell all along, this is horrible, this is nowhere near the quality of real Bach. So this sort of bias, they came up with a thing called the discrimination test based on it. This kind of bias, we've had it for decades: if we don't know it's made by an AI, we like it a lot better. It's a really important.

Speaker 1 (04:24):
Bias for us to be aware of in this conversation
that is so fascinating. David Cope with The New York
Times in his obituary called him, David Cope, the godfather
of AI music, is dead at eighty three. He's the
godfather of AI music, which is something I didn't even
know existed until talking to you. So that's pretty cool.
Thanks for teaching me that fact today. Okay, So now
that we know what creativity is and that the academic
world has essentially defined it as something that if you
use that definition, which I agree with, sure, I align
that technology can be creative in fact.

Speaker 2 (04:57):
So, you know, a lot of people feel very intimidated by this, saying, oh no, if we're not the unique species that's creative, then what's the point? Machines are going to be more creative than us. It's over. And the reason I'm not afraid is kind of special. That's because I think that we are nowhere near our creative capabilities. Human beings have barely scratched the surface of what we're capable of. And the reason I know this for sure is because of the experiences people have on psychedelics. I'm not a.

Speaker 1 (05:34):
Fan of drugs, not gonna lie to you. But I do think that, and this is what I would say, when applied for medicinal reasons, totally work it out with your doctor. I'm not a doctor, so I'm not going to judge anyone. But I know if we do have younger listeners, or there are kids in the car who are listening to us: listen to your parents, please, folks. Okay, back to you, Maya, with that caveat. That was important.

Speaker 2 (06:01):
So if you look at people who have those experiences, they could put text-to-image models to shame. With text-to-image models, we were like, wow, it creates images in fractions of a second, oh no, it's so much more creative than us. But under certain psychedelics, certain people can generate images in their minds that are even more detailed, even more amazing than ChatGPT and Midjourney, even faster. So these models are not outshining us if we look at human capability more broadly. And if you amplify the capabilities of AI: a man by the name of Alexander Mordvintsev, in twenty fifteen at Google, took an image recognition system and amplified its capabilities, and hallucinogenic images started coming out. The system ended up being called DeepDream. So we can argue that the machine itself hallucinates as well.
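For the curious, the core DeepDream move Ackerman describes, nudging the input image so that whatever a neuron already faintly detects gets stronger, can be sketched in a few lines. This is only a toy illustration of the gradient-ascent idea: the single linear "neuron" and its learned pattern below are made up, standing in for a layer of a real convolutional network.

```python
import numpy as np

# Toy DeepDream sketch: gradient ASCENT on the input amplifies whatever
# the "neuron" already weakly detects. All names here are hypothetical.

rng = np.random.default_rng(0)
pattern = rng.normal(size=16)        # stand-in for a learned feature detector
image = rng.normal(size=16) * 0.01   # start from a near-blank "image"

def activation(img):
    """How strongly the toy neuron responds to the image."""
    return float(pattern @ img)

start = activation(image)
step = 0.1
for _ in range(50):
    # For this linear neuron, d(activation)/d(image) is just `pattern`,
    # so each step pushes the image toward what the neuron "wants to see".
    image += step * pattern

# The image now contains an exaggerated version of the learned pattern.
print(start, "->", activation(image))
```

Real DeepDream does the same thing with backpropagation through a deep network, which is why amplifying mid-level layers produces those hallucinogenic eyes and animal shapes.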

Speaker 1 (06:56):
Okay. Like, my mind is blown. I'm taking a deep breath. I'm like, wait a minute, wait a minute, wait a minute. I don't even know where to start. This is awesome. I want to keep having this conversation, but I'm like, this is totally new territory for someone like me who's used to interviewing senators and governors. And this is why I love this new life that I live. If AI is hallucinating... you can study the brain, and a brain under, you know, the influence of anything, and I'm assuming, I mean, I'm not a brain scientist, I'm assuming you can study the brain to, like, understand how it thinks. But how do you study whether AI is dreaming slash hallucinating? How does a scientist even look at that?

Speaker 2 (07:40):
Yeah, the way that we think and the way that AI thinks have some commonality. It's not identical, but that's not surprising, because we modeled AI after our brain. And so it has neurons with connections between them. It learns through data the way that we learn from sensory input. And essentially, when we go about our lives... Anil Seth has an amazing YouTube talk and book that explain this in more detail. But essentially, we are constantly predicting, constantly predicting what's going to happen.

Speaker 1 (08:10):
I'm gonna make that red light, I'm not gonna make that red light, when I'm driving.

Speaker 2 (08:13):
Yeah, exactly right, right, And sometimes we're wrong, and then
we have to update our predictions. And sometimes they're wrong
in really funny ways. Like if you just broke up
with someone, you might think that you see them when
they're not there. That happened to me in college.

Speaker 1 (08:25):
Wait, what, you? That's crazy. I was thinking you were going elsewhere, of, like, you Google what it means if they text me back or they don't. But the point that you're making is that the human is constantly looking for meaning, or certainty, or a way to predict, to navigate their life. And you're saying the AI is modeled like that.

Speaker 2 (08:47):
Yeah, wildly, they're built exactly like that. And actually, the way that we convert prediction into generation is that you just have the AI agree with itself. You ask it, what do you think is going to be the next word? And then it agrees with itself. And then you ask it to guess what is going to be the next word, and again you have it agree with itself. So prediction and generation are essentially the same thing for the computer, and in many ways it's the same thing for us as well.

Speaker 1 (09:14):
Politicians are real good at agreeing with themselves, to keep going. Sorry, old habits die hard. See, I get to knock my old mainstream-media political-nonsense world. But you're right. I mean, literally, that actually was my aha moment today. What you just said: they all agree with themselves, and they just generate a cacophony of sameness. So, okay, so now we get to how the AI hallucinates.

Speaker 2 (09:41):
When AI hallucinates, that means that it's sort of being true to its original self. You know how children are super, super imaginative? So the original AI is kind of like a kid. It's making stuff up. It's really, really, really good at being imaginative. And then society comes in, and sci-fi says that the AI is supposed to be an all-knowing oracle, true all the time. And so OpenAI and Google and Microsoft are desperately, desperately trying to align this imaginative being into telling us the truth. There is no "the truth." You're talking about politics here. Humans can't agree among themselves what the truth is.

Speaker 1 (10:22):
Can't even agree where to go for dinner. Yeah, yeah, exactly.

Speaker 2 (10:26):
And so this desperate attempt to turn this imaginative machine into an all-knowing oracle is partially successful, I admit, but it also ultimately keeps failing, because the AI keeps hallucinating. Not because it's evil, not because it's even wrong in, like, an absolute sense, but because it was built to imagine. And we stand to gain a lot from letting the AI imagine, having it be creative, being imaginative together with us.

Speaker 1 (10:54):
Okay. So there are people who are listening who are like, what? So I'm going to represent them. Okay, so give me an example. Like, define an AI hallucination, not the Google DeepDream psychedelic-image thing. What is a tangible example for the everyday person who's using their favorite AI platform? What is an example of an AI hallucination?

Speaker 2 (11:18):
So, okay. The way that I look at it and the way that it's commonly defined are different. The common definition is that an AI hallucination is when an AI gives you some piece of information that is not factually correct. That is how it has come to be known, unfortunately.

Speaker 1 (11:32):
But is that an error? That to me just sounds like a mistake. Like, what's the difference between a hallucination and a mistake? Because the Internet has a lot of mistakes.

Speaker 2 (11:42):
Well, let me tell you, the difference is that in some contexts the idea of a mistake is much more flexible. So when you're looking at lyrics writing, when you're looking at image generation, when you're looking at storytelling, being imaginative and being allowed to make mistakes is the same thing. Right? There's almost no such thing as the wrong lyrics when you're writing lyrics. And if you fine-tune your machine to never make mistakes, it's going to become terrible at lyrics writing. It's going to kind of keep spitting out the same stuff.

Speaker 1 (12:10):
But I don't want it to hallucinate. We could be talking about something as sophisticated as the percentages of an asteroid hitting the planet, or something as mundane as whether or not I should wear a jacket because it's cold outside, the weather, or what the stock market's doing, you know. Like, that to me feels like an error and not a hallucination. Does that make sense?

Speaker 2 (12:28):
Oh, we kind of treat people the same way, right? Like, if somebody is being imaginative about the stock market, maybe we're not as appreciative of that. My point is, you don't have to use the same system for both, but right now we do. Right now, LLMs, large language models, are used both for, you know, doing imaginative stuff and for fact retrieval. The fewer mistakes it makes, the less imaginative it's going to be. And if we choke all the creativity out of it so that we only ever get correct things, it's going to be completely useless for anything that requires any amount of creativity, like even writing a story, you know, even writing an essay. All of this requires some creativity. And if you want to keep some creativity, it's going to make mistakes, just like people make mistakes if you let them be creative.
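One concrete knob behind this tradeoff is the sampling "temperature" that most LLM APIs expose: it reshapes the model's next-word probabilities, trading safe, repetitive picks for surprising ones. The raw scores in this sketch are made up; they stand in for a model's preferences over four candidate next words.

```python
import numpy as np

scores = np.array([2.0, 1.0, 0.5, 0.1])  # hypothetical scores for 4 words

def softmax_with_temperature(scores, temperature):
    """Convert raw scores into next-word probabilities."""
    z = scores / temperature
    z = z - z.max()          # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

# Low temperature: nearly all probability on the single safest word
# (good for fact retrieval, repetitive for lyrics).
cold = softmax_with_temperature(scores, 0.1)

# High temperature: probability spreads out, so less likely, more
# "imaginative" words get picked too, along with more mistakes.
hot = softmax_with_temperature(scores, 5.0)

print(cold.round(3))  # almost all mass on the first word
print(hot.round(3))   # roughly uniform across the four words
```

Which is the mathematical version of Ackerman's point: you cannot push the distribution toward "only ever correct" without also flattening out the surprising choices that creative writing needs.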

Speaker 1 (13:13):
And you just said something really big, which is that there are different systems for different applications, and that right now all of us are using LLMs. Right? It would.

Speaker 2 (13:23):
Be nice if there were a different systems.

Speaker 1 (13:25):
So there's not. So right now, it's essentially like we're all different athletes trying to play different sports on the same exact field. Or some of us might be trying to play football on a bicycle, or some of us might be trying to swim on a bike. And exactly what you're saying, I think, because I have no PhD, you do, but what I think you're saying is that maybe the stock market should use a different system of AI, and that artists should use, you know, a painting version, for lack of a better word, a creative version of AI. And the way that these AI systems are designed, it might be helpful if there are different types, not different brands, but literally, like, different instruments. I'm gonna use cars as the analogy: like a sports car for this thing, or an off-roading, you know, Bronco or whatever to go off-roading. Am I making sense?

Speaker 2 (14:21):
That's exactly it. So, I mean, there exists a system called Midjourney, which is great for imaginative visual stuff. It's much more creative, so a lot of artists use it to get inspiration. Whereas ChatGPT, again because of this kind of all-knowing-oracle approach, is a lot less creative in its images, because it tries so hard to be accurate. Right? So if you want accurate information, I'm not saying that ChatGPT is the way to go necessarily, but that's where its makers are trying really hard to take it. If you want to write lyrics, if you want to write imaginative lyrics, if you want to do it in a way that expresses you, we have a system for that. In this case, it's Wave AI's LyricStudio. So it does make sense to use the tool for the purpose that you're trying to go for.

Speaker 1 (15:05):
That's really interesting, and it gets to this broader point. What I like is that I just felt more empowered as a human. You're saying, like, we can build these AI systems, or these AI cars. I kind of like the car analogy the most, because people kind of get that. I think we can use that as a way to design it. And if you want to be more creative, then you can work to create the AI car that is more creative. Kind of like picking a college or a university, almost, you know. And I think building that model is fascinating to me. So you're saying we should maybe lean into that. And in that case, maybe I do want to be friends with the AI model that can hallucinate and dream.

Speaker 2 (15:52):
Then your expectations are aligned. When you know that your AI is allowed to be creative, all of the stuff that you can create when you lean into the creative capabilities of these machines... Yeah, it's really incredible, the art, the music that people are making.

Speaker 1 (16:08):
What if people said, oh, you can only make art with pencils, you can only paint in black and white? I mean, Doctor Maya Ackerman, in her new book that's out, Creative Machines: AI, Art & Us, would say, well, wait a minute, AI is like painting in different colors, colors that we don't even know existed yet. Okay, maybe we should re-examine that. And these artists in the world, I think we should listen to them, because honestly, they're probably figuring this stuff out a lot more than we are, a lot faster than we are. And maybe next time my AI makes a mistake, I'm gonna say, you know what, I hope you had a good dream, I hope you slept well, instead of what I do right now. I'm not gonna lie to you, right now I'm like, what the heck? How did you make that mistake? I should go to Grok. I'm going back to ChatGPT. I'm going back to Grok. Because you're trying to figure it out. Anyway, anything else you want to add? This has been an awesome conversation. I can't thank you enough. I learned a lot. Thank you.

Speaker 2 (17:00):
One final thing. I think this tendency of machines to imagine and hallucinate is not only useful for artists; it's also an important thing to keep in mind when we use machines in, sort of, the way that OpenAI and the likes want us to use them. The fact that they can't help but hallucinate means that we need to stay critical of their output. I don't mean that in a kind of, like, hateful way, just in a realistic, grounded way. Don't blindly believe what the AI says; check the information. And when your expectations are set correctly, the experience with the AI becomes much more positive, regardless of what you use it for. And have fun, enjoy.

Speaker 1 (17:37):
Yeah, it is. It's really cool. Thank you so much. I mean, what I really, really respect about the work that you're doing is that people like me can ask you really crazy questions and you come at it from a very fact-based, studious perspective. You've advised all these companies, you've spoken at them, but at the end of the day, you're answering the questions that we all have, which is, like, is AI getting in the booth with Kendrick Lamar or Bad Bunny? At the same time, you're also answering, can AI be on a hallucinogenic? Which is just crazy. But it's the fact that there are actual scientists and academics like you who are thinking through this that kind of makes me feel a lot better. So thank you.

Speaker 2 (18:18):
It was a real pleasure. Thank you for having me.
