Is it time to scrap HTTP?

I stumbled on an initiative that wants to replace HTTP. They go so far as to call HTTP “obsolete” and line up reasons that make it hard to disagree. It is about a completely new protocol called IPFS, an acronym for the unbelievably cool “InterPlanetary File System”, built from the ground up to replace the technology the web runs on today.

Read this blog post in Swedish.

What is the actual problem with HTTP? Of course we use it to distribute billions of web pages daily, and few users think the web works poorly, unless they are unhappy with their broadband connection. But behind the scenes there is a lot of grumbling. HTTP is built on an assumption that was a general truth before peer-to-peer took the net by storm: that information lives in one central repository. A web page sits on a server, and when the page is deleted from that server, it also disappears from the internet. Every internet user recognizes this: the number of 404 messages you get from the web grows quickly with the age of the links you click. If you are lucky you can find a mirror of a lost website in some archive, but often the information is simply gone. And the crucial point is that while information can disappear in many different ways, users have no way to get around the information owner's decision to delete a site. Yes, 404 (Not Found) is such a common sight in our browsers that an entire culture has grown up around creating the best 404 pages, with creative photos, animations and jokes. But does it have to be this way?

Napster popularized the peer-to-peer technique and opened the doors to the next generation of information distribution. Then came BitTorrent, which made p2p technology even smarter and easier to use, and now it may be that time is running out for older, centralized protocols like HTTP.

Distributed networks are smart networks

Think of a web where every document (regardless of whether it contains HTML, images or other resources needed to show the web page) is distributed across the network instead of centralized on one server. This would mean that even if a document were deleted from the server where it was first published, the content could still be reached on the net, as long as one of the distributors in the network retained the file. In the same way that your BitTorrent client looks for identical files across the entire network based on their “hash”, IPFS lets your browser download the files needed to render a web page from any of the computers participating in the distributed network.
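The principle behind this is content addressing: a file is identified by a hash of its bytes rather than by the location of a server. As a rough sketch in Python (real IPFS uses its own multihash-based content identifiers, not a bare SHA-256 digest), it looks something like this:

```python
import hashlib

def content_address(data: bytes) -> str:
    # The address is derived from the bytes themselves,
    # not from the server that happens to store them.
    return hashlib.sha256(data).hexdigest()

page = b"<html><body>Hello, distributed web!</body></html>"
print(content_address(page))  # same bytes, same address, on any machine
```

Any node on the network that holds these exact bytes can answer a request for this address, which is what makes the source of the file irrelevant.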

Through sharing the data in the network, one can also let users of the network participate and share the cost of distribution. YouTube shovels enormous amounts of data every day to serve the entire internet with video. Every user who watches a film on YouTube must get the video data directly from one of YouTube's servers; the same data must travel across the internet again and again, regardless of whether your neighbor watched the same film five seconds earlier. With peer-to-peer technology, the distributed network becomes even more capable, because parts of the data can be retrieved from many different sources.
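To make that concrete, here is a minimal sketch (my own simplification, not the actual IPFS chunking scheme) of how a large file can be split into hashed chunks so that each chunk can be fetched from a different peer:

```python
import hashlib

CHUNK_SIZE = 4  # tiny for the demo; real systems use much larger chunks

def split_and_hash(data: bytes) -> list[tuple[str, bytes]]:
    """Split content into chunks, each addressed by its own hash."""
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    return [(hashlib.sha256(c).hexdigest(), c) for c in chunks]

video = b"lots of video data"
manifest = split_and_hash(video)

# Each chunk can be fetched by hash from whichever peer holds it; the
# neighbor who just watched the film is as good a source as the origin.
reassembled = b"".join(chunk for _, chunk in manifest)
assert reassembled == video
```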

IPFS is distributed

So what does IPFS do differently? It uses a distributed model instead: a p2p network. Every file published on the network gets its own cryptographic hash, and clients no longer ask for a file name on a particular domain, but for the hash that matches the file to be downloaded. As long as the file is unchanged, the hash stays the same, so where the file actually comes from is unimportant; all that matters is that the right file is downloaded. If the file is changed at the source, it must be reissued with a new hash. But the old file can remain on the network, as long as those who saved it earlier choose to keep it. This also means it becomes possible to find older versions of a file on the net, as long as you have the right hash for the version you're looking for.
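The versioning consequence is easy to demonstrate: even the smallest edit produces a completely new address, while the old address keeps pointing at the old bytes (again a simplified sketch, with a plain SHA-256 digest standing in for an IPFS hash):

```python
import hashlib

v1 = b"My article, as first published."
v2 = b"My article, as first published. Now with a correction."

print(hashlib.sha256(v1).hexdigest())  # address of the original version
print(hashlib.sha256(v2).hexdigest())  # a completely different address

# Publishing v2 does not overwrite v1: any node still holding v1's
# bytes can serve them to anyone who asks for v1's hash.
```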

Today many large websites use content distribution networks, so-called CDNs, to spread their content over different servers around the world. The idea is that if you sit in Europe and surf to a site, the content can be retrieved from a server closer to you than the one serving a user in North America or Asia. In addition, every resource gets broader distribution, so it is no longer dependent on a single server. IPFS works as one gigantic CDN and makes the concept of a separate CDN meaningless. Anyone who participates in IPFS can instead choose which content they want to help distribute. The idea is that when enough people participate in the distribution, the redundancy and the sheer spread of the data will make it superior to today's centralized or decentralized solutions. The data itself is protected from all manner of attacks and accidents: if a server goes down, if an entire server hall burns up, or if an authority confiscates a web page's content, nothing more happens than that one copy of the file disappears. It remains on the other nodes in the network that have chosen to save it. The means of access does not change either: even if the original domain name is seized, the file can still be found by the right hash.
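The resilience argument can be pictured with a toy model (nothing like the real IPFS routing, just the idea): as long as one node keeps a copy, a request by hash still succeeds:

```python
PAGE_HASH = "a9c3f1"  # a hypothetical, shortened content hash

# Three nodes, two of which have chosen to keep a copy of the page.
nodes = {
    "node-eu": {PAGE_HASH: b"<html>the page</html>"},
    "node-us": {PAGE_HASH: b"<html>the page</html>"},
    "node-asia": {},
}

def fetch(content_hash: str) -> bytes | None:
    """Return the content from the first node that holds it."""
    for store in nodes.values():
        if content_hash in store:
            return store[content_hash]
    return None

del nodes["node-eu"]  # a server hall burns up, or a domain is seized
assert fetch(PAGE_HASH) == b"<html>the page</html>"  # still reachable
```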

The future is distributed

In the long term this is of course a threat to our existing structure of centrally allocated domain names: first and foremost because it matters less and less which address a resource has, but also because it may soon be possible to distribute the domain name structure itself.

I was interested enough to install IPFS and start experimenting with the protocol. Maybe this is a nail in the coffin for HTTP, but given how long we have used the old protocol and how long it takes for people to change protocols (IPv6 was standardized back in the 1990s and we still have not stopped using IPv4…), I don't think HTTP is threatened just yet.
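If you want to try it yourself, the basic commands look roughly like this (taken from the go-ipfs command-line tool; exact output will vary, and the hash below is a placeholder):

```
$ ipfs init          # create a local repository and key pair
$ ipfs daemon        # connect your node to the network
$ ipfs add hello.txt # prints: added <hash> hello.txt
$ ipfs cat <hash>    # fetch the file by its hash, from any node that has it
```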


About the blogger

Måns Jonasson, digital strategist. A web enthusiast who is responsible for digital strategy at IIS. Måns has worked with the web since 1994 and has during that time run his own agency, worked on several different communities and at a communications agency.
