Earlier this month, Facebook announced its new Graph Search, a system for searching Facebook’s huge collection of photos, users and ‘liked’ interests. It sounds like a fantastic concept, but how will Facebook power this monster?
According to a Facebook executive speaking last week at the Open Compute Summit in Santa Clara, CA, the answer is the Disaggregated Rack. Before we discuss what that is, some background:
Facebook’s servers are used all over the world every second of the day. There are more than one billion users, who upload more than 300 million photos per day, with about 220 billion photos already stored. Users ‘like’ and comment 4.2 billion times per day and have established 140 billion friend connections to date. Interestingly, about 80% of users are from outside the US.
This means that Facebook’s servers are in high demand not just at peak hours US time, but around the globe, so they are in heavy use for about 16 hours per day.
European users begin accessing Facebook at around 8 PM Pacific Time; traffic peaks at 9 PM and stays high until 4 AM. East Coast traffic peaks between 10 AM and 4 PM PT.
Current Facebook Infrastructure
Facebook’s traffic comes in via front-end Web clusters, each of which has about 12,000 servers, with 20-40 servers per rack. A cluster contains roughly 250 racks of Web servers, 30 racks of cache, 30 racks of ad servers, and a few others. Clusters are designed so there is never a bottleneck before the Web server.
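As a rough sanity check, those rack counts line up with the 12,000-server figure. Here is a quick back-of-the-envelope tally in Python; the servers-per-rack value and the size of the “few others” bucket are assumptions for illustration, not Facebook’s numbers.

```python
# Back-of-the-envelope check of the cluster figures above.
SERVERS_PER_RACK = 40          # article gives a 20-40 range; take the high end

racks = {
    "web": 250,
    "cache": 30,
    "ads": 30,
    "other": 10,               # assumed rough figure for "a few others"
}

total_racks = sum(racks.values())
total_servers = total_racks * SERVERS_PER_RACK
print(f"~{total_racks} racks, ~{total_servers} servers per cluster")
# ~320 racks, ~12,800 servers -- in line with the "about 12,000 servers" figure
```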
Multi-feed server racks power the Wall of the Facebook page. Each rack holds a complete copy of user activity from the last two days, and all 40 servers in a rack work as one.
When one of the servers needs to generate your Wall, it chooses a random aggregator, generates your friend list, and then queries the data ‘leaves’ in parallel. Each leaf then responds with the most recent activity for those friends, from which the top 30 stories are assembled.
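To make the fan-out concrete, here is a minimal Python sketch of that aggregator-and-leaf pattern. The class names, sharding scheme and scoring are illustrative assumptions, not Facebook’s actual multi-feed code.

```python
# Minimal sketch of the aggregator/leaf fan-out described above.
from concurrent.futures import ThreadPoolExecutor
import random

NUM_LEAVES = 40      # assumption: one leaf per server in the rack
TOP_N = 30           # the "top 30 stories" from the article

class Leaf:
    """Holds recent activity (last ~2 days) for a shard of users."""
    def __init__(self, shard_id):
        self.shard_id = shard_id
        self.activity = {}   # user_id -> list of (timestamp, story)

    def query(self, friend_ids):
        # Return recent stories for any of these friends stored on this shard.
        stories = []
        for friend in friend_ids:
            stories.extend(self.activity.get(friend, []))
        return stories

class Aggregator:
    def __init__(self, leaves):
        self.leaves = leaves

    def build_wall(self, friend_ids):
        # Query every leaf in parallel, then keep the newest TOP_N stories.
        with ThreadPoolExecutor(max_workers=len(self.leaves)) as pool:
            results = list(pool.map(lambda leaf: leaf.query(friend_ids), self.leaves))
        merged = [story for chunk in results for story in chunk]
        merged.sort(key=lambda s: s[0], reverse=True)   # newest first
        return merged[:TOP_N]

# A Web server would pick a random aggregator and hand it the friend list:
leaves = [Leaf(i) for i in range(NUM_LEAVES)]
aggregators = [Aggregator(leaves)]
wall = random.choice(aggregators).build_wall(friend_ids=[101, 102, 103])
```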
Facebook has five standard types of servers: Web, Hadoop, database, feed and Haystack (photo storage). Most of the servers the company purchases are for Web serving. By limiting itself to a few server types, Facebook maximizes pricing competition among its suppliers and can easily repurpose servers.
The trade-off is reduced flexibility, which becomes an issue as services evolve over time: designers have to be ready for changing demands on CPU, RAM, disk space, flash capacity and more.
The solution to the problem is the Disaggregated Rack. Facebook wants hardware that fits each service more closely while also improving cost and serviceability. Today, one multi-feed rack requires 80 processors, 5.8 TB of RAM, 80 TB of storage and 30 TB of flash.
Facebook wants to break each of these resources out and scale them independently of one another.
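A toy model shows why that matters. In the sketch below, a classic rack forces you to buy every resource in fixed proportions, so a RAM-hungry service strands CPU and disk; the rack totals are the multi-feed figures quoted above, while the demand numbers are made up for illustration.

```python
# Toy comparison: scaling whole racks vs. scaling resources independently.
import math
from dataclasses import dataclass

@dataclass
class RackSpec:
    cpus: int
    ram_tb: float
    storage_tb: float
    flash_tb: float

# Multi-feed rack totals quoted in the article.
multifeed_rack = RackSpec(cpus=80, ram_tb=5.8, storage_tb=80, flash_tb=30)

def racks_needed_monolithic(demand: RackSpec, rack: RackSpec) -> int:
    """Classic racks: the scarcest resource forces you to buy everything else too."""
    return max(
        math.ceil(demand.cpus / rack.cpus),
        math.ceil(demand.ram_tb / rack.ram_tb),
        math.ceil(demand.storage_tb / rack.storage_tb),
        math.ceil(demand.flash_tb / rack.flash_tb),
    )

# Made-up example: a service that is RAM-hungry but light on everything else.
demand = RackSpec(cpus=80, ram_tb=23.2, storage_tb=80, flash_tb=30)
print(racks_needed_monolithic(demand, multifeed_rack))   # 4 racks, mostly idle CPU and disk
# With a disaggregated rack, you would instead add RAM sleds alone.
```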
To power the new Graph Search, Facebook will use 20 servers, eight flash sleds, two RAM sleds and a storage sled, for a total of 320 CPU cores, three terabytes of RAM and 30 TB of flash. The key figure is the ratio of RAM to flash, which is about 1:10; the hope is that the ratio will climb to 1:15 as the search becomes more efficient.
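For what it’s worth, the quoted totals are consistent with that ratio, and the quick arithmetic below also shows how much RAM the same amount of flash would need at the hoped-for 1:15 (a hypothetical illustration, not a Facebook figure).

```python
# Quick check of the RAM-to-flash ratio quoted for the Graph Search rack.
ram_tb, flash_tb = 3, 30
print(f"current ratio 1:{flash_tb // ram_tb}")               # 1:10

# If the software becomes efficient enough to run at 1:15, the same 30 TB of
# flash would need only 2 TB of RAM (illustrative arithmetic, not a quoted spec).
target_ratio = 15
print(f"RAM needed at 1:{target_ratio}: {flash_tb / target_ratio} TB")
```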
The purpose of the Disaggregated Rack is to push the servers as hard as possible while cutting down bottlenecks, so that the hardware matches the application.
Graph Search is not the only new Facebook feature this year, but it promises to be one of the biggest to date.