Tesla CEO Elon Musk has been vocal about his concern that artificial intelligence could turn evil.
Earlier this year, Musk pledged $10 million to the Future of Life Institute (FLI) for research on artificial intelligence safety. He also signed an FLI letter outlining research priorities for robust and beneficial artificial intelligence, which discusses the need for things like law and ethics research.
"I just think we should cautious about the advent of AI and a lot of the people that I know that are developing AI are too convinced that the only outcomes is good," Musk said during his presentation in Paris over climate talks earlier this month.
"We need to consider potentially less good outcomes, and to be careful and really to monitor what's happening and make sure the public is aware of what's happening," he added.
So when Musk announced the formation of OpenAI, a nonprofit research company with the goal to "advance digital intelligence in the way that is most likely to benefit humanity as a whole," it seemed in line with his typical stance.
But Sam Altman, Musk's co-chair at OpenAI and president of the startup seed accelerator Y Combinator, doesn't fear AI becoming evil, at least not anytime soon.
"That is so far in the future it's difficult to discuss and focus on," Altman told Tech Insider. "We can imagine near-term negative effects."
Asked for an example of a potential near-term negative effect, Altman pointed to genetic programming, a machine-learning technique that uses the power of natural selection to find answers to problems. Once set up, the program can run on its own without any human intervention.
"When you have a system that can change itself, and write its own program, then you may understand the first version of it," Steve Omohundro, a scientist who specializes in machine learning and programming, told the Daily Beast about genetic programming.
"But it may change itself into something you no longer understand. And so these systems are quite a bit more unpredictable. They are very powerful and there are potential dangers," Omohundro said.
This doesn't mean the program would change itself into some Terminator-style evil robot, but rather that it could deviate from its original purpose and morph into something its creators no longer understand.
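For readers unfamiliar with the technique, here is a minimal sketch of genetic programming in Python. It is purely illustrative and has no connection to OpenAI's work: the hidden target function, the population size, and the mutation-only evolution scheme (real genetic programming typically adds crossover between parents) are all arbitrary choices made for the example.

import random

OPS = ('+', '-', '*')

def random_expr(depth=3):
    # Leaf: the variable x or a small integer constant.
    if depth == 0 or random.random() < 0.3:
        return random.choice(['x', random.randint(-5, 5)])
    # Internal node: an operator with two random subtrees.
    return (random.choice(OPS), random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, x):
    # Recursively evaluate an expression tree at a given x.
    if expr == 'x':
        return x
    if isinstance(expr, int):
        return expr
    op, left, right = expr
    a, b = evaluate(left, x), evaluate(right, x)
    return a + b if op == '+' else a - b if op == '-' else a * b

def fitness(expr, samples):
    # Sum of squared errors against the target data; lower is better.
    return sum((evaluate(expr, x) - y) ** 2 for x, y in samples)

def mutate(expr):
    # Replace a random subtree with a freshly generated one.
    if not isinstance(expr, tuple) or random.random() < 0.3:
        return random_expr(depth=2)
    op, left, right = expr
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

def evolve(samples, pop_size=200, generations=50):
    population = [random_expr() for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the best quarter of the population.
        population.sort(key=lambda e: fitness(e, samples))
        survivors = population[:pop_size // 4]
        # Variation: refill the population with mutated survivors.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return min(population, key=lambda e: fitness(e, samples))

# Hidden target: y = x**2 + x. The program is never shown this formula;
# it only sees the (x, y) samples and evolves an expression to match them.
samples = [(x, x * x + x) for x in range(-10, 11)]
best = evolve(samples)
print('best expression:', best, '  error:', fitness(best, samples))

Run with a stock Python 3 interpreter, this often finds an expression equivalent to x*x + x, but because every step is random, different runs can land on structurally different programs, which is exactly the unpredictability Omohundro describes.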
Exploring these kinds of near-term negative effects will be more in line with OpenAI's mission than the question of evil, sentient robots. OpenAI, formally announced last week, has already raised $1 billion toward conducting research that will benefit humanity.
Any patents the group receives will be shared with the public. Backers include Reid Hoffman, co-founder and executive chairman of LinkedIn; Jessica Livingston, a founding partner of Y Combinator during its seed stage; and Greg Brockman, CTO of Stripe.
But when asked whether OpenAI has any specific research goals, Altman said that is still being actively debated.
Regardless, Altman is not concerned about evil robots, and he noted OpenAI won't be either unless that scenario somehow becomes a reality.
"If we get to a world where we're worried about artificial intelligence being sentient evil — but we're so far from that sci-fi world the press like to write about," he said.