- GitHub survived the world’s largest DDoS attack ever recorded.
- This amplification attack used a memcached-based technique that peaked at 1.35 terabits per second, delivered via 126.9 million packets per second.
On 28 February 2018, GitHub, the most popular code sharing and hosting service for version control, faced the largest DDoS (distributed denial-of-service) attack ever recorded. The attack took the website down for about 10 minutes (from 17:21 to 17:30 UTC).
This attack was more than twice the size of the Mirai botnet DDoS attack that occurred on 20 September 2016. However, it is unlikely to remain the biggest attack for long, given memcached's reflection capabilities.
To better explain what actually happened, let's define a few fundamental terms.
DDoS Attack – It's an attempt to make an online service unavailable by overwhelming it with a huge amount of invalid traffic from multiple sources. Specifically, attackers try to overload systems and prevent legitimate requests from being fulfilled. Because the invalid requests come from multiple sources, the attack can't be stopped by simply blocking a single source.
Usually, attackers target a wide range of crucial services and resources, like news websites and payment gateways. Activism, blackmail and revenge can motivate these attacks.
Memcached – It's a free, open-source, high-performance, distributed memory object caching system, generally used to speed up dynamic web applications by caching data and objects in RAM. This is done to decrease the number of times an external data source must be read.
It’s an in-memory key-value store for small pieces of arbitrary data (like objects, strings) from results of API calls, database calls, or page rendering.
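The cache-aside pattern described above can be sketched in a few lines of Python. A plain dict stands in for a real memcached client so the example is self-contained, and `slow_db_query` is a hypothetical stand-in for an expensive external data source:

```python
# Minimal cache-aside sketch. The dict simulates memcached's key-value
# store; slow_db_query is a hypothetical expensive database call.
cache = {}

def slow_db_query(user_id):
    """Pretend this hits an external database."""
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    key = f"user:{user_id}"
    if key in cache:                # cache hit: skip the data source
        return cache[key]
    value = slow_db_query(user_id)  # cache miss: read the source...
    cache[key] = value              # ...and keep the result in RAM
    return value
```

With a real memcached deployment the dict lookups would be network calls to the cache server, but the control flow is the same: check the cache first, and only fall back to the slower source on a miss.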
How The Attack Works
The attack abuses memcached instances inadvertently exposed to the public internet and running over User Datagram Protocol (UDP). Through IP address spoofing, memcached's responses can be directed at another address (like the ones used to serve GitHub), sending a ridiculously large amount of data toward the target compared to what genuine sources would send.
This attack was unusual (and worse) in that it had an amplification factor of more than 51,000: for every single byte sent by an attacker, up to 51 kilobytes are sent toward the target. Network providers can mitigate the attack by filtering all traffic coming from UDP port 11211, the default port used by memcached.
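Some back-of-the-envelope arithmetic shows why that amplification factor matters. Using the figures from this article (the 51,000x factor and the 1.35 Tbps peak), the attacker-side bandwidth needed to sustain the peak is surprisingly small:

```python
# Rough amplification math using the figures reported in the article.
amplification_factor = 51_000   # bytes reflected per byte sent (worst case)
peak_attack_bps = 1.35e12       # 1.35 Tbps observed at the victim

# Each attacker byte yields up to ~51 KB at the target, so the spoofed
# request stream driving the whole attack needs only:
attacker_bps = peak_attack_bps / amplification_factor
print(round(attacker_bps / 1e6, 1))  # 26.5 (megabits per second)
```

In other words, roughly a home broadband connection's worth of spoofed requests, reflected off open memcached servers, is enough to generate terabit-scale traffic.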
The Incident
The attack emerged from more than a thousand autonomous systems across tens of thousands of different endpoints. This memcached-based amplification attack peaked at 1.35 terabits per second, delivered via 126.9 million packets per second.
Figure: Anomaly in the ratio of transit-in and transit-out traffic (Source: GitHub Engineering)
GitHub reported a rapid rise in inbound transit bandwidth (peaking at 100 gigabits per second) and moved traffic to Akamai, a CDN provider, which supplied additional edge network capacity. Four minutes after full recovery, GitHub withdrew the routes to internet exchanges to shift an additional 40 gigabits per second away from its edge.
It all happened in two major phases: the first part of the attack peaked at 1.35 terabits per second (Tbps); the second took place at 18:00 UTC and spiked to 400 gigabits per second.
What’s Next?
In the last couple of years, GitHub has more than doubled its transit capacity, which has enabled it to withstand these kinds of attacks. It is also continually developing robust peering relationships across a diverse set of exchanges.
Other than GitHub, a few organizations have experienced similar attacks, and according to Akamai, even bigger attacks are expected in the near future. Since the initial disclosure, there has been a big increase in scanning for open memcached servers.
The good news is that network operators can rate-limit or block traffic coming from UDP port 11211 and stop invalid traffic from entering and exiting their networks, but implementing this at a very large scale will take some time.
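The filtering idea can be illustrated with a toy sketch: drop any UDP packet whose source port is 11211 (memcached's default). In practice this is done in routers and firewalls (e.g. ACLs or iptables rules), not in application code; the packet dicts below are hypothetical stand-ins for parsed packet headers:

```python
# Toy illustration of port-based filtering, assuming packets are already
# parsed into dicts. Real mitigation happens at the network edge.
MEMCACHED_PORT = 11211

def should_drop(packet):
    """Drop UDP traffic sourced from memcached's default port."""
    return packet["proto"] == "udp" and packet["src_port"] == MEMCACHED_PORT

packets = [
    {"proto": "udp", "src_port": 11211, "len": 51_000},  # reflected memcached reply
    {"proto": "tcp", "src_port": 443,   "len": 1_400},   # ordinary HTTPS traffic
]
allowed = [p for p in packets if not should_drop(p)]
print(len(allowed))  # 1 -- only the HTTPS packet survives
```

The trade-off is that legitimate memcached traffic crossing network boundaries would also be dropped, which is acceptable because memcached was never meant to be exposed to the public internet in the first place.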