What Actually Scares Him
Luckey: I am worried about the potential of autonomy to do really spooky things — the Skynet scenario, where it takes over all the weapons and nukes us all. But it's actually near the bottom of a long list of things I'm concerned about.
I'm very concerned about very evil people using very basic AI. I'm much more worried about that than extremely advanced AI wanting to wipe us out on its own. I'm more worried about the bioweapon stuff. I'm more worried about people doing things irrationally in conflicts — they're the hardest to predict and the hardest to counter.
The Moral High Ground
Luckey: Some people say — how can you work on autonomy when it has so many ethical problems? You think the moral high ground is to wash your hands of it and let people who don't care about those things work on it?
There's no moral high ground in ensuring that less competent, less principled people work on these problems. At least in my view. At Anduril, we always have a human in the loop. There's always a person telling the AI what to do. It's not making any life-or-death decisions without a person who's directly responsible.
The Nuclear Analogy
Luckey: Autonomy is going to be comparable in importance to nuclear power. Imagine if the United States had tried to set international norms around nuclear weapons when we had none. It would have been a joke. We never could have done it.
The only way the US can lead is if we're actually involved. That's the only way we could potentially help regulate AI in a way where it doesn't become a really big travesty internationally. I'm spooked about it. But I think we'll manage it.