Amazon Web Services (AWS) is the world’s largest provider of internet-based computing services, and its Simple Storage Service, known as S3, is widely relied upon. Amazon S3 hosts images and files for over one hundred thousand websites and apps. That’s why a recent four-hour outage nearly brought the web to a halt.
The average webpage’s content is hosted across different services and locations, so when a particular service goes down, parts of that page often become unavailable. But when a great many webpages rely on one service to host integral parts of their operation, and that service goes down, myriad issues crop up all at once.
That’s what happened when S3 went down yesterday. Many websites use S3 to host their images and data, so websites and apps like Netflix, Medium, and Slack found their images weren’t loading, among other problems. The situation echoed October 2016, when a botnet attacked the domain name system (DNS) provider Dyn and effectively shut down a large chunk of the East Coast’s internet for several hours. Like S3, Dyn was a single service that hundreds of thousands of websites relied upon.
As of this writing, it is unknown whether the S3 issue was caused by an attack; Amazon has said only that it was experiencing “high error rates.” But the cause almost doesn’t matter: even when things are running “normally,” bugs are an unavoidable part of operating any software (the recent Cloudbleed incident is a case in point), so there is still a need for more decentralization. S3 isn’t entirely centralized, of course; its files aren’t all sitting on one massive hard drive, and other parts of the world had no trouble accessing their S3 content during the outage. But when service to entire regions of a country can go down at once, that is still too big a single point of failure.
When S3 was experiencing errors, the AWS Service Health Dashboard showed green checkmark images across the board, indicating everything was running smoothly, even though it certainly wasn’t. That’s because the Service Health Dashboard hosts its red X images (meant to replace the green checkmarks when things are NOT fine) on S3, the very service that went down. Since S3 hosted its own health report, it couldn’t tell anyone it wasn’t working properly. It’s exactly this kind of circular dependency that decentralization can eliminate.
Events like these recent service availability issues are practically advertisements for competing decentralized services like the InterPlanetary File System (IPFS) and Swarm. IPFS is designed to replace HTTP, the Hypertext Transfer Protocol that powers the World Wide Web. With IPFS, instead of your computer asking a specific server for the content you’re looking for, it requests the content directly, identified by a cryptographic hash of its data, and any node holding that content can serve it. Because of this decentralized manner of storing data, a server can go down without affecting a user’s ability to retrieve a file stored on IPFS. This reliability is what makes a decentralized content delivery network (CDN) so appealing.
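To see why content addressing sidesteps the single-server problem, here is a minimal sketch in Python. It is not the real IPFS protocol (which uses multihash-based content identifiers and a distributed hash table); plain SHA-256 and in-memory dictionaries stand in for those pieces, and the node names are made up for illustration:

```python
import hashlib

def content_address(data: bytes) -> str:
    """Derive an address from the content itself.
    (IPFS uses a multihash CID; plain SHA-256 is a simplified stand-in.)"""
    return hashlib.sha256(data).hexdigest()

# A toy "network" of two nodes: the key depends only on the bytes,
# not on any host name, so any node holding the content can serve it.
node_a: dict[str, bytes] = {}
node_b: dict[str, bytes] = {}

data = b"<html>hello, decentralized web</html>"
addr = content_address(data)
node_a[addr] = data
node_b[addr] = data  # the same content replicated on a second node

# Even if node_a goes offline, the same address still resolves on node_b.
del node_a[addr]
retrieved = node_b[addr]
assert content_address(retrieved) == addr  # the hash also verifies integrity
```

The key design point is that the address doubles as an integrity check: a client can verify that whatever node answered actually returned the right bytes, which is what makes it safe to fetch content from untrusted peers.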
Swarm is Ethereum’s version of IPFS, acting as a decentralized CDN, but with a built-in rewards system as well. While IPFS has Filecoin to incentivize data storage, Swarm will use Ether. Swarm is not only a CDN but a distributed storage platform as well. Swarm plans to do this by breaking files up into small chunks of data and distributing those chunks across its network of nodes. Those chunks are recompiled back into a file via the Ethereum Name Service (ENS), which acts as a DNS for Ethereum. By linking human-readable names to their cryptographically secured, decentralized chunks of data, a file is safely stored in the cloud until it is called upon.
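The chunk-and-reassemble flow described above can be sketched as follows. This is a simplified illustration, not Swarm’s actual protocol: the tiny chunk size, the single dictionary standing in for the whole node network, the `name_service` lookup table standing in for ENS, and the `demo.eth` name are all assumptions made for the example:

```python
import hashlib

CHUNK_SIZE = 4  # tiny for illustration; real systems use far larger chunks

def store_chunks(data: bytes, store: dict) -> list[str]:
    """Split data into fixed-size chunks, each keyed by its own hash.
    In a real network the chunks would be spread across many nodes."""
    refs = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        ref = hashlib.sha256(chunk).hexdigest()
        store[ref] = chunk
        refs.append(ref)
    return refs

def retrieve(refs: list[str], store: dict) -> bytes:
    """Fetch each chunk by reference, verify it, and recombine the file."""
    out = b""
    for ref in refs:
        chunk = store[ref]
        # Each chunk is checked against its hash, so a tampered or
        # corrupted chunk is detected before reassembly.
        assert hashlib.sha256(chunk).hexdigest() == ref
        out += chunk
    return out

store: dict[str, bytes] = {}   # stand-in for the distributed node network
name_service: dict = {}        # stand-in for an ENS-style name lookup

refs = store_chunks(b"decentralized storage demo", store)
name_service["demo.eth"] = refs  # human-readable name -> chunk references

restored = retrieve(name_service["demo.eth"], store)
```

Because each chunk is addressed by its own hash, no single node needs the whole file, and the name service only has to map a readable name to a list of chunk references.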
The more the web becomes decentralized, the more reliable it will be. Centralization generally creates a single point of failure, which is unacceptable for any system that requires high availability. Decentralization will help make online services more resilient against outages and attacks, and it points the way forward as Web 3.0 pushes away from centralization and embraces a more distributed approach to delivering services.