LinkedIn has been working with multiple CDNs for more than five years, and now that the world is moving to IPv6, it is working with those same CDNs to migrate from IPv4 to IPv6. Internet Protocol version 6 (IPv6) is the most recent version of the Internet Protocol, designed to succeed IPv4 and given its coordinated global rollout with World IPv6 Launch in 2012. It provides an identification and location system for computers on networks and routes traffic across the Internet.
IPv4 offers only about 4.3 billion IP addresses, and the pool of unallocated addresses has been exhausted. IPv6, by contrast, supports 340 undecillion addresses, enough for every device worldwide to have its own unique public IP address. IPv6 also improves security: with IPsec, it can encrypt traffic and verify packet integrity, offering VPN-like protection for ordinary Internet traffic. IPv6 devices are typically built with dual-stack support, which allows IPv6 and IPv4 to run concurrently, so IPv4 devices will keep working for the foreseeable future as organizations gradually transition to IPv6.
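Dual-stack operation is visible from any client: resolving a dual-stacked hostname returns both AAAA (IPv6) and A (IPv4) records, and the client can use either family. A minimal Python sketch of bucketing a host's addresses by IP version (`localhost` is used only as a self-contained example hostname):

```python
import ipaddress
import socket

def resolve_dual_stack(host, port=443):
    """Resolve a hostname and bucket the results by IP version.
    On a dual-stack network, a dual-stacked host typically returns
    both AAAA (IPv6) and A (IPv4) records."""
    v4, v6 = [], []
    for *_, sockaddr in socket.getaddrinfo(host, port,
                                           proto=socket.IPPROTO_TCP):
        addr = ipaddress.ip_address(sockaddr[0])
        (v6 if addr.version == 6 else v4).append(str(addr))
    return v4, v6

# A dual-stack client can prefer the IPv6 answers while still
# falling back to IPv4 when needed.
v4, v6 = resolve_dual_stack("localhost")
print("A records:", v4, "AAAA records:", v6)
```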
LinkedIn’s Edge Infrastructure
Many organizations are still transitioning to the next-generation technology, LinkedIn among them. The professional networking service has taken a multi-CDN approach to the move. LinkedIn works with multiple CDNs to deliver static content, such as the scripts used on its site, along with member-generated content, such as resumes, profile pictures, and video uploads. Content is delivered from the edge servers closest to each user. The Edge SRE team runs one in-house CDN, four external CDNs, three DNS platforms, and all of LinkedIn's PoPs.
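Running several CDNs implies a steering decision: which provider should serve a given client? One common approach (and, as described later in this article, the one LinkedIn used to work around coverage gaps) is RUM-based steering, where real-user measurements drive the choice. A toy sketch of the idea, with hypothetical CDN names and illustrative numbers:

```python
import statistics

# Hypothetical RUM samples: per-CDN response times (ms) reported by
# clients in one geography. Names and numbers are illustrative only.
rum_samples = {
    "cdn_a": [38, 41, 40, 95, 39],
    "cdn_b": [52, 50, 49, 51, 300],
}

def pick_cdn(samples):
    """Steer traffic to the CDN with the lowest median response time,
    the rough idea behind RUM-based DNS steering. The median resists
    outliers like the stray 300 ms sample above."""
    return min(samples, key=lambda cdn: statistics.median(samples[cdn]))

print(pick_cdn(rum_samples))  # cdn_a: median 40 ms vs 51 ms
```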
The Transition to IPv6
Back in July 2017, LinkedIn reached the 50% IPv6 traffic milestone: over 50% of its pages were accessed over IPv6 from mobile devices in the U.S. The firm is currently focused on enabling IPv6 on its internal networks and applications, with the aim of removing IPv4 internally starting in 2018. Its public services and external networks will continue to support IPv4 for the foreseeable future, but in response to the performance and security improvements it has seen on IPv6, it is gradually making the transition.
In a recent blog post, the company discussed why it is moving its network traffic from IPv4 to IPv6, including the fact that IPv6 can be faster than IPv4, particularly on mobile networks (the source of the majority of LinkedIn's traffic), and the fact that the Internet is running out of IPv4 addresses.
In 2013, LinkedIn enabled IPv6 dual-stack support on its production mail servers. A year later, it enabled it across all of its data centers and CDNs, aside from the CDNs in China. However, it found that not all of its CDN partners had sufficient IPv6 coverage, so performance was not as strong as on IPv4-only networks. Thus, in 2016, LinkedIn brought on two new CDN partners. It didn't enable IPv6 right away, as it first wanted to analyze the performance of its dual-stack networks and solve any problems it found. LinkedIn "wanted to ensure that there was no negative impact to member experience on the site as a result of us starting to serve content over dual stack networks."
Pre-Ramp Findings
During the pre-ramp phase, LinkedIn employed a mix of third-party real user monitoring (RUM) via Cedexis and synthetic monitoring via Catchpoint, and discovered several potential issues over the course of testing:
- Efficient Routing – LinkedIn ensured that incorrect routes were investigated and fixed. Members in one particular geography were being sent over IPv6 to distant CDN edges instead of the geographically closest ones. LinkedIn worked with the CDN partner to correct its dual-stack network maps, and those of its upstream provider, improving routing in those cases.
- Network Timing – The Edge SRE team realized it needed to monitor network timing metrics, including DNS, connect, SSL, request, and response times, as it evaluated how the shift to IPv6 affected members. It found, for instance, that one of its CDN partners in India was experiencing DNS resolution issues over IPv6 in a major region. LinkedIn worked with the CDN partner to set up an IPv6-enabled DNS PoP in that region, significantly improving resolution times.
- Optimizing CDN Usage – When LinkedIn found that one of its CDN partners had limited IPv6 coverage across its PoPs, it employed RUM-based DNS steering to direct traffic to other CDN providers and work around the performance issues.
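The timing metrics above can also be sampled from the client side with a simple probe. A minimal sketch in Python that times the DNS, TCP connect, and TLS handshake phases of a connection (this is only an active probe for illustration; LinkedIn's actual data comes from browser RUM, not a script like this):

```python
import socket
import ssl
import time

def time_connection(host, port=443, use_tls=True):
    """Rough client-side timing of the phases a RUM tool breaks out:
    DNS resolution, TCP connect, and TLS handshake, all in ms."""
    timings = {}
    t0 = time.monotonic()
    family, socktype, proto, _, sockaddr = socket.getaddrinfo(
        host, port, proto=socket.IPPROTO_TCP)[0]
    timings["dns_ms"] = (time.monotonic() - t0) * 1000

    sock = socket.socket(family, socktype, proto)
    try:
        t1 = time.monotonic()
        sock.connect(sockaddr)
        timings["connect_ms"] = (time.monotonic() - t1) * 1000
        if use_tls:
            ctx = ssl.create_default_context()
            t2 = time.monotonic()
            sock = ctx.wrap_socket(sock, server_hostname=host)
            timings["tls_ms"] = (time.monotonic() - t2) * 1000
    finally:
        sock.close()
    return timings
```

Comparing such timings for the same host over IPv4 and IPv6 paths is one way to spot the kind of regional DNS or routing issues described above.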
Post-Ramp Findings
LinkedIn gathered desktop RUM data for its post-ramp analysis from the browser Navigation Timing and Resource Timing APIs (it intends to surface the same data for mobile members next). It employs custom headers on each CDN to link the IP version of a client connection to the particular CDN, which allows it to slice the RUM data by client IP version. Its findings:
- In North America and Europe, the performance of dual-stacked IPv6 networks was sometimes better than that of IPv4 networks, and was generally on a par with it.
- In India, due to providers' limited PoP coverage, there is room for improvement; however, LinkedIn is able to fall back to IPv4 via Happy Eyeballs (also known as Fast Fallback) when needed to maintain performance as the region transitions.
- According to APNIC, IPv6 usage in China was still under 2% in 2017. Carriers in China will need to prioritize increased IPv6 support.
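The Happy Eyeballs mechanism mentioned above (RFC 6555, "Fast Fallback") works by attempting IPv6 first and starting an IPv4 attempt after a short head-start delay, keeping whichever connection succeeds first. A simplified sketch, assuming a threaded client (a real implementation returns as soon as a winner appears rather than waiting for both attempts):

```python
import concurrent.futures
import socket
import time

def happy_eyeballs_connect(host, port, ipv4_delay=0.3):
    """Simplified Happy Eyeballs sketch: race an immediate IPv6
    attempt against a delayed IPv4 attempt; keep the first winner.
    For simplicity this waits for both attempts before returning."""
    infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    v6 = next((i for i in infos if i[0] == socket.AF_INET6), None)
    v4 = next((i for i in infos if i[0] == socket.AF_INET), None)

    def attempt(info, delay):
        time.sleep(delay)  # IPv4 gets a head start only if IPv6 exists
        family, socktype, proto, _, sockaddr = info
        sock = socket.socket(family, socktype, proto)
        try:
            sock.connect(sockaddr)
        except OSError:
            sock.close()
            raise
        return sock

    attempts = []
    if v6:
        attempts.append((v6, 0.0))
    if v4:
        attempts.append((v4, ipv4_delay if v6 else 0.0))
    if not attempts:
        raise OSError("no usable addresses for %s" % host)

    with concurrent.futures.ThreadPoolExecutor(len(attempts)) as pool:
        futures = [pool.submit(attempt, info, d) for info, d in attempts]
        winner = None
        for fut in concurrent.futures.as_completed(futures):
            if fut.exception() is None and winner is None:
                winner = fut.result()
            elif fut.exception() is None:
                fut.result().close()  # close the losing connection
        if winner is None:
            raise OSError("all connection attempts failed")
        return winner
```

The head-start delay is why dual-stack clients in regions with weak IPv6 coverage still see acceptable performance: a failed or slow IPv6 path costs only a few hundred milliseconds before IPv4 takes over.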
Overall, LinkedIn says, “As IPv6 adoption continues to grow, we expect performance and availability of IPv6 networks to surpass IPv4.”