
HELLO FUTURE: AI, War & the Economy: How the Future Is Being Rewritten Right Now



Artificial intelligence is rapidly reshaping the battlefield.

In this episode, Kevin Cirilli speaks with Dr. Craig Albert of Augusta University about how AI systems helped accelerate targeting, intelligence analysis, and battlefield decision-making during the recent U.S.–Israeli campaign against Iran, known as Operation Epic Fury. From advanced sensor-fusion platforms to large language models used in intelligence workflows, AI played a growing role in helping commanders process vast amounts of data and act faster than ever before.


Dr. Albert explains how tools from companies like Palantir and emerging AI models are transforming modern warfare—speeding up intelligence fusion, improving precision targeting, and raising new strategic and ethical questions for military leaders.

What happens when algorithms start shaping decisions in war? And how will artificial intelligence change the balance of power in future conflicts?

Kevin and Dr. Albert break down the technology, the risks, and what this moment reveals about the next era of warfare.

Meet The Future: https://mtf.tv/

See omnystudio.com/listener for privacy information.

Speaker 1 (00:07):
The future of war has arrived. This era of conflict happening, really breaking out all around the world, is the first era of conflict with artificial intelligence being deployed. Hello Future, it's me, Kev. This is a dispatch from the Digital Frontier. The year is twenty twenty-six. The planet is Planet Earth. My name is Kevin Cirilli. Remember, you can listen to all of the latest episodes of Hello Future on your iHeartMedia app, and check out MTF dot tv to subscribe to our newsletter. We're talking all about Operation Epic Fury. We're talking all about this age of conflict with artificial intelligence. And my guest today is Doctor Craig Albert, who is way smarter on AI and conflict than I will ever be. He's out of Augusta University. He's got a new analysis examining how AI technologies are being deployed in the workflows of conflict. He's got his PhD and is the graduate director of the PhD program at Augusta University in Intelligence, Defense, and Cybersecurity Policy. He's got his master's in Intelligence and Security Studies. I mean, did you just never stop studying, Craig? I mean, how would you describe your area of expertise?

Speaker 2 (01:19):
Yeah, it's twofold. The first one actually was ethnic conflict. So I started my studies doing international ethnic conflict, focusing on fun places and fun things like the Sudan War, Chechnya and Russia, the Kurds in Iraq, you know, real uplifting types of topics. And then, since I'm located here in Augusta, Georgia, which, for your listeners that don't know, is home to NSA Georgia, and US Army Cyber Command has a significant function here. So with that opportunity, I shifted my focus into intelligence and cybersecurity policy, which was really understudied in international security studies.

Speaker 3 (01:57):
And so I saw a real.

Speaker 2 (01:58):
Gap in the literature and a big gap for American
national security interests as well within this genre, and so
I said, okay, time to make the switch.

Speaker 1 (02:06):
Earlier this morning, I was talking to a source who also specializes in artificial intelligence being deployed in conflicts, and in many ways, we've never been where we are before. How would you describe this moment, historically speaking, with the advent of AI being deployed in conflict?

Speaker 2 (02:28):
Is the new error of the revolutionary of military affairs,
the revolution, and military affairs. So a lot of people
are comparing this to the advent of the nuclear weapon.
I think it's more akin to the advent of air
superiority and the invention invention of air dominance, something like
you saw with the United States air campaign against Serbia
over Kosovo in the late nineties. I think it's akin
to that, but much more sophisticated because AI assisted warfare
now takes the human casualty concerned and out of the
conflict or kinetic operation concern and so now you're more
likely to be able to engage in combat without having
humans on the ground or a lot of intelligence analysts
for instance, being involved in it. So it increases your
chances to engage in conflict, which which is way more
than the error or nuclear era did.

Speaker 1 (03:19):
So I hear you on that front. But it also feels like there's this new topography that has to be secured, that has to be protected, and that's the digital domain. How should people be thinking about that? Because I remember when I was growing up outside of Philadelphia, you know, we were taught one of the things that kept America safe during World War Two was we had the big pond between us and Europe, the Atlantic Ocean, and in terms of China, there was a bigger pond, the Pacific Ocean. But now it feels like the battle exists in the palm of your hand and your smartphone, or it exists in this ability on the cyber front. I was just on TMZ, it was in my algorithm, and they had an article up that Iran could send a drone to California. And I'm thinking to myself, you know, obviously that's not going to take out California or the United States. At the same time, psychologically, it would raise a lot of concerns about our ability to protect ourselves from rogue digital actors. All of these systems, whether it's Claude, whether it's, you know, Anthropic's been in the headlines, but Palantir, all of these systems are now so intertwined with the conflict. I think it's really hard for people like me to understand, Craig. And so, how should we be thinking about this, and what questions should we be asking our elected officials as we navigate these unprecedented times?

Speaker 2 (04:45):
That's a great set of questions. There's a lot to parse out there. How should we think about AI? First of all, everybody's information is constantly being combed through, sifted through, and collected at all times, by every bit of internet activity, signals activity, radio wave frequency activity.

Speaker 3 (05:02):
That you're involved in.

Speaker 2 (05:04):
So one should always be you know, I gave a
talk this week and somebody said, you make as paranoid
that everything's always being collected on us, Like what can
we do?

Speaker 3 (05:13):
And my response is, you can't do anything.

Speaker 2 (05:15):
What you need to recognize is the fact that everything's being constantly collected on you, and for different reasons. And so AI is a part of every aspect of our lives right now, and that, unfortunately, right now means it's also in the hands of nation-state adversaries to the United States, and so being aware of it is the first step to protecting yourself.

Speaker 3 (05:35):
Right.

Speaker 1 (05:36):
For the parents listening out there, what can they do
to protect their kids?

Speaker 3 (05:39):
Yeah, that's a great question.

Speaker 2 (05:40):
First, never show photos online, right? That's very important. Never give names of your children or your family members, things of that nature. Freeze their credit reports, even though they might be babies, right? So take Iran, for instance. This is the way these rogue actors, these nation-state actors, operate. They can target you using cyber criminals, right? So these are state-affiliated cyber criminals. So in Russia, China, Iran, North Korea, nonstate actors like the ISIS caliphate will all engage in social engineering to try to get you to click something so that they then can get access to your bank accounts.

Speaker 3 (06:15):
And that's a psychological operation.

Speaker 2 (06:17):
Making you a victim of identity fraud, for instance, isn't just for them to gain financial leverage over you. It's for the hurt impact. It's for the psychological factor. So always protecting your family with little things like credit freezes will help secure you in this AI age.

Speaker 3 (06:33):
Social media. I don't think we should be.

Speaker 1 (06:35):
On it. Social media, I really do.

Speaker 3 (06:38):
It's terrible.

Speaker 1 (06:39):
I say to people all the time, you want to get people off of social media, get them a dog, because I'm telling you, you've got to go on walks when it's snowing like it is here in DC. I know this will come out when it'll be bright and sunny. But I mean, we've been talking a lot about that on this show. Because your message is very real, Craig, but it is a little depressing. So I would like you to tell me why I should be hopeful. Because when I'm reading TMZ and they're telling me about a drone attack in California, or I'm reading, you know, all of these different reports, and there have been horrible, horrible stories that have come from this conflict. Why should we be hopeful?

Speaker 3 (07:15):
Oh that's a good question.

Speaker 1 (07:17):
Come on, Craig, you gotta have something for us.

Speaker 2 (07:20):
Yeah, well, I think you should be hopeful because there are thousands of smart, intelligent, hard-working warriors across the world trying to protect the United States, trying to protect NATO, trying to protect just humanity from people that want to do bad things to good people. And so that should give you some rest at night, to be able to relax.

Speaker 1 (07:42):
But to push you on this, the AI... I mean, I'm not a doom-and-gloomer when it comes to technology. Anyone who listens to this program, I always say, the technology capabilities, where the military has many times been ahead of the private sector, but which are then deployed in the private sector, have unleashed America's greatest export, which is this innovation. Does the AI have the ability to keep us safer? I mean, we know all it takes is one mistake for there to be something that goes wrong. But does it also have the ability to make us safer?

Speaker 2 (08:10):
Well, because of its data collection and intelligence analysis capabilities, absolutely. So it's much more likely to understand threats. Let's take an example from social media. So ISIS operates on social media, right? Al Qaeda does. Yes, they have social media handles that people try to take down, but they always reappear. Claude and Project Maven, right, are able to gather information on that instantaneously, on Telegram for instance, and even, you might not believe it, but terrorists communicate, for instance, via Xbox chat. So they play these video games, right, and they engage in terrorist conversations on Xbox, PlayStation, all those platforms. What Claude is doing, or can do, is it can incorporate and code all that language, all that chatter that's going on, in real time. And if the military has access to that, which the Trump administration is trying to do. That's what the fight is about right now with Anthropic and Claude: how much access should the Trump administration be able to have, and how can it use it? So if it's allowed to scrape social media, for instance, you'd be able to tell, well, there's a terrorist saying, and I have no privileged information, I'm giving you a hypothetical right now. So if a terrorist chats something on Xbox saying Philadelphia, for instance, Claude would be able to instantly gather that intelligence and send it off to the appropriate intel analysts, and then the analysts would be able to decipher whether that's an actual threat or not, or if it was just some kid saying something silly that a kid shouldn't be saying or exposed to on a video game. And so the AI defense does give us support in what it's capable of. But, and I always teach this, especially for political philosophy students as well: with security in a democracy and a democratic republic, you lessen one's rights and freedoms and responsibilities. Right? So the more secure you are with these platforms, the less freedom you have, and in this context it means the less privacy you have. So if you want the United States government to be able to source and scrape from social media, then you're going to realize that other nation-state actors are doing that to United States citizens as well. And so we're trading a bit of freedom and privacy for the greater security and protection of being able to prevent attacks against us.

Speaker 1 (10:18):
That's where we are at this moment in time. And we've been here before as a country. I mean, I remember growing up, I was in middle school after September eleventh, and even just Americans' experience of going to airports changed, and, you know, the Freedom of Information Act. And it feels like we're in that moment again, but this time it's on the cyber front. I just think that culturally, and you know, you being in Humanities and Social Sciences and the College of the Arts, I think culturally we haven't had an effective dialogue about any of this. You know, I was saying in a previous episode, we have a driver's license age; you get a driver's license, at age eighteen you can join the military, at age twenty-one you are legally able to drink alcohol. But we don't really have those same cultural touchstone moments when it comes to engaging on the digital front here. I think we need them. I mean, hardly am I trying to opine now, and I'm not talking in the political realm at all. I'm talking in, like, the human-to-human societal bucket. Give me a historical, or it doesn't have to be historical, but what parallel should we be drawing from culturally as this technology is being deployed? Because we made a lot of mistakes with the Internet. The Internet did a lot of things right, but we made a lot of mistakes. And I would argue the mental health crisis in this country, you know. And I'm not here to point fingers at anyone, but I think we all need to be looking at how do we not make those same mistakes again with social media. I joke to one of my friends, I mean, it's kind of like when they used to smoke on airplanes. I mean, in the future, we're gonna be like, we let kids roam on these social media apps and get addicted to this stuff. And, you know, in Congress in Washington, both parties were asleep at the wheel, and it's crazy. So how do you view that? You know? And I'm asking you very open-ended questions because I want you to take it anywhere you want to go.

Speaker 2 (12:20):
Oh, this is great, it's fantastic. This is the conversation that we need to be having much more. I think the idea is akin to when the printing press was established, right? Or, if you allow me to get religious for a second, when the

Speaker 3 (12:36):
Church, the Catholic Church approved.

Speaker 2 (12:38):
The transition from Latin to vernacular languages with the Bible, right? So now, yes, you had all these people that at one time did not have access to the Bible, right, or to books, and now all of a sudden they do have access to this.

Speaker 3 (12:54):
So the idea is, what do you do now? How
do you train society?

Speaker 2 (12:57):
And I don't mean that from an elitist standpoint, from
the Jeffersonian standpoint who said Americans need to be trained
in virtue to understand how to run a democratic republic.
We need the same type of thing for creative cognitive reasoning.
Now with social media and the Internet age and AI
overall is we have the access to all this information,
We have the access to social media. How do we
groom our society to know what to do with this
information and what ought to be done with this information.

Speaker 3 (13:24):
I'll quote another famous.

Speaker 2 (13:26):
Philosopher here, Jeff Goldblum, I believe, is his name, from Jurassic Park.

Speaker 3 (13:30):
The actor. Right. So, if you've seen the original.

Speaker 1 (13:32):
It's all Jeff Goldblum, by the way. Great movie, great reference. Yes, amazing.

Speaker 2 (13:40):
There's that one line, right, where Jeff Goldblum, I forget his character's name, but he is in the helicopter and they're going to the island in the first Jurassic Park, the original, the O.G. one, right, and he says, scientists, or society, are so busy and so consumed with thinking about whether we can do something that we never sit back and consider, ought we or should we do something? And that's where I think we should be thinking about AI-assisted technology, AI-assisted warfare, and social media and online presence overall. Because yes, we have the capability of micro-targeting and digital surveillance capitalism. We can sell our collected identities to third-party vendors to make our lives easier and better for advertising and to get things to

Speaker 3 (14:21):
Our doorstep easier.

Speaker 2 (14:22):
But should we be doing all this? Does it matter if Amazon gets us deliveries in three days, for the amount of product that we become in that transaction? And so I think that's the type of conversation we should be having, if you think philosophically or literarily about this.

Speaker 1 (14:37):
Doctor Ian Malcolm, a fictional character from Jurassic Park, portrayed by Jeff Goldblum. Really appreciate you coming on. You're going to have to come back on the program to continue to explain all these issues to us, and I'll give you the final word.

Speaker 2 (14:54):
Yeah, if you're a parent and you're worried about this stuff affecting your kids, make sure they read classic texts, not on an electronic device. And if they get an hour of online time, make sure they get, you know, thirty minutes of actual reading time that's not online. It does a lot for the synapses in your brain and helps combat dis-, mis-, and malinformation.

Speaker 1 (15:13):
Doctor Craig Albert, the guy with all of the degrees down at Augusta University, really one of the nation's leading experts on artificial intelligence being deployed in conflict and the future of modern conflict. Really appreciate your time, and I look forward to talking with you in the future.

Speaker 3 (15:30):
Thank you very much.
