Using some quick and dirty math, Facebook CTO Mike Schroepfer estimates that the amount of content that Facebook considers putting on your News Feed grows 40% to 50% year-over-year.
But because people aren't gaining more time in the day, the company's algorithms have to be much more selective about what they actually show you.
"We need systems that can help us understand the world and help us filter it better," Schroepfer said at a press event prior to his appearance at the Dublin Web Summit Tuesday morning.
That's why the company's artificial intelligence team (called FAIR) has been hard at work training Facebook's systems to make them understand the world more like humans, through language, images, planning, and prediction.
It has already trained its computer vision system to segment individual objects out of photos and then label them. The company plans to present a paper next month showing that it can now segment images 30 percent faster, using far less training data, than it previously could.
Ultimately, Schroepfer explains, this could have practical applications like helping you search through all your photos to surface any that contain ocean scenes or dogs. Or, you could tell your News Feed that you like seeing pictures with babies, but hate seeing photos of latte art.
It could also come in handy for photo editing. For example, you could tell the system to turn everything in a photo black-and-white, except one object (like the image on the right).
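That effect follows directly from having per-object segmentation. Below is a minimal sketch, assuming a segmentation mask for the chosen object already exists (produced by whatever model you like); the file names and the helper function are hypothetical, not anything Facebook has described.

```python
import numpy as np
from PIL import Image

def keep_one_object_in_color(photo_path: str, mask_path: str, out_path: str) -> None:
    """Desaturate a photo everywhere except where the mask marks the chosen object."""
    photo = np.asarray(Image.open(photo_path).convert("RGB"), dtype=np.float32)
    # Mask: white (255) where the object is, black (0) elsewhere.
    mask = np.asarray(Image.open(mask_path).convert("L"), dtype=np.float32) / 255.0

    # Standard luminance-weighted grayscale conversion.
    gray = photo @ np.array([0.299, 0.587, 0.114], dtype=np.float32)
    gray = np.stack([gray] * 3, axis=-1)

    # Blend: original colors inside the mask, grayscale outside it.
    result = mask[..., None] * photo + (1.0 - mask[..., None]) * gray
    Image.fromarray(result.astype(np.uint8)).save(out_path)

keep_one_object_in_color("photo.jpg", "object_mask.png", "highlighted.jpg")
```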
These improving visual skills pair well with Facebook's language recognition.
Schroepfer says the company is in the early stages of building a product for the 285 million people around the world with low vision and the 40 million who are blind. It would let them ask an artificial intelligence system for details about what is in any photo on their feed.
"We're getting closer to that magical experience that we’re all hoping for," he says.
The team is also tackling predictive, unsupervised learning and planning.
Making M into a superpower
Both of these research areas will be important to powering M, the virtual personal assistant that Facebook launched earlier this summer in its chat app, Messenger. Right now it's in limited beta in the Bay Area, but the goal, Schroepfer says, is to make it feel like M is a superpower bestowed upon every Messenger user on earth.
Right now, everything M can do is supervised by real human beings. However, those people are backed up by artificial intelligence: Facebook has hooked its memory networks up to M's console so they can train on the data gathered from its beta testers.
It might sound obvious, but the memory networks have helped M learn which questions to ask first when someone wants to order flowers: "What's your budget?" and "Where do you want them sent?"
The AI system discovered this by watching a handful of interactions between users and the people currently powering M.
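As a toy illustration of that idea (not Facebook's memory networks, and with hypothetical example data), you can imagine mining logged human-assistant exchanges for the follow-up question operators most often ask first for a given kind of request:

```python
from collections import Counter, defaultdict

# (user request, first follow-up question the human operator asked) -- made-up examples
logged_exchanges = [
    ("order flowers", "What's your budget?"),
    ("order flowers", "Where do you want them sent?"),
    ("order flowers", "What's your budget?"),
    ("book a restaurant", "How many people?"),
]

first_questions = defaultdict(Counter)
for intent, question in logged_exchanges:
    first_questions[intent][question] += 1

def suggest_first_question(intent):
    """Return the follow-up question most often asked first for this kind of request."""
    counts = first_questions.get(intent)
    return counts.most_common(1)[0][0] if counts else None

print(suggest_first_question("order flowers"))  # "What's your budget?"
```

The real system learns far richer behavior than a frequency count, but the loop is the same: human responses become training signal, and the automated share of responses grows.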
"There is already some percentage of responses that are coming straight from the AI, and we're going to increase that percentage over time, so that it allows us to train up these systems," Schroepfer says.
"The reason this is exciting is that it's scalable. We cannot afford to hire operators for the entire world, to be their virtual assistant, but with the right AI technology, we could deploy that for the entire planet, so that everyone in the world would have an automated assistant that helps them manage their own online world. And that ends up being a kind of superpower deployed to the whole world."
Schroepfer says that the team has made a lot of progress over the last year, and plans to accelerate that progress over time.
"The promise I made to all the AI folks that joined us, is that we're going to be the best place to get your work to a billion people as fast as possible."