“The future is already here.” – Interview with Google’s Ilya Grigorik



Wonder what technology innovations to expect within the next few months? What's going to change? What are people at Google working on right now? We conducted an interview with one of Google's most iconic representatives, who shares a powerful message: the future is already here.

Ilya Grigorik is a web performance engineer at Google and co-chair of the W3C Web Performance Working Group. He was also a speaker at two of this year's major conferences: Google I/O and SmashingConf. Ilya has great insight into what's happening right now, and we asked him to share some of it. We're delighted that Ilya agreed and that we could make this interview happen.

What 3 technology innovations or innovative development techniques will have the biggest impact on web developers in the next year or two?

There is a lot of really exciting and very important work happening at the transport layer: HTTP/2, QUIC, TLS 1.3. We could spend hours on each one, but in the interest of time, I’ll just mention a few highlights…

First, HTTP/2 adoption is growing rapidly, which is really great to see! However, while the RFC (7540) is done, there is still a lot of implementation and optimization work that we (the web perf community at large) need to do to make the most of it. For example, we need to build new benchmarks and testing suites for servers that actually exercise the new prioritization primitives (hint: it's not just the number of requests, but also how the bytes are sequenced); browsers need to get better at communicating priority information to servers; we need to develop new best practices and common APIs for server push; browsers need to rationalize how they treat and expose pushed resources to applications; we need to revisit our old “best practices” around how we deliver resources (bundling, etc.), and so on. In short, there is still a lot of important work to be done here.

TLS 1.3 is nearing completion and—fingers crossed—we may actually see it rolling out by the end of the year, and worst case early next year. The spec contains a lot of cleanups, improvements around security and privacy, and of course… 0-RTT handshakes for repeat visitors! For the curious, check out ekr’s talk for more details. Finally, the IETF is kicking off a new working group for QUIC (soundbite summary: HTTP/2 over UDP), which opens up yet more optimization opportunities for improving mobility, performance in high-latency environments, and so on—see Chrome’s experimental results.

Popping up a few levels, there is also a lot of really exciting work happening in the browser that will change how we build and deliver applications. For example:

  1. There is ongoing work to enable better “defense in depth” mechanisms to help deliver more secure applications: HSTS, HPKP, CSP, SameSite cookies, Feature Policy, moving existing and new powerful features under secure origins, and so on.
  2. The combination of ServiceWorker, the Fetch and Cache APIs, push notifications, and a large supporting cast of other smaller but no less important APIs is enabling us to build and deliver performance-resilient applications: all critical resources live on the client, applications are offline- and poor-network-friendly, background sync, push notifications, etc.

Best of all, much of the above is already available and is being used with great success in production: the future is already here, it’s just not evenly distributed across all the browsers… yet.

IPv4 addresses were depleted some time ago. When do you expect the internet to go fully IPv6?

Soon™. Of course, that’s what we’ve been saying for a couple of decades now, but the data shows a clear trend up and to the right: IPv6 adoption has been doubling every year. You can extrapolate from there.
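As a toy illustration of that extrapolation: if adoption really does double every year, reaching full adoption from any starting share takes only a handful of years. The 10% starting figure below is purely illustrative, not a measured statistic.

```python
# Toy extrapolation of IPv6 adoption, assuming the trend described above:
# adoption roughly doubles every year. The starting share is illustrative.

def years_until_full_adoption(current_share, growth_factor=2.0):
    """Count whole years until adoption would reach 100% of the internet."""
    years = 0
    share = current_share
    while share < 1.0:
        share *= growth_factor
        years += 1
    return years

print(years_until_full_adoption(0.10))  # doubling from 10%: 4 years
```

Real-world adoption curves are of course S-shaped rather than exponential forever, which is why "Soon™" remains the honest answer.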

Where would you like to see the Content Delivery Network market shifting in the next year or two? What would you like the product to look like?

I don’t have a crystal ball to see where the CDN market will head, but I can provide a few items from my own wishlist: 

  • Free and optimized TLS on by default for all customers: TLS 1.3; integrations with LetsEncrypt; TLS to an origin.
  • Optimized end-to-end HTTP/2 deployment (client to edge and edge to origin), with well-tuned support for prioritization, server push, etc.
  • QUIC…

What projects are you currently working on?

The bulk of my time is spent on making sure that web developers have the right tools and APIs to measure the performance of their applications. A big part of this is maintenance and improvements to existing APIs developed by the W3C WebPerf working group. The other half of my time is spent on debugging production applications to understand where the problems are, and then working with browser developers to figure out where and how we can fix these problems, whether we need to expose new APIs or primitives to help developers resolve them, and so on.
One big area of focus right now is on understanding how we can improve “post-load” experience: fast response to user input, smooth scrolling, energy efficiency, and so on. We have a fairly robust toolkit for optimizing the initial load, but not so much for the experience afterwards. Projects like requestIdleCallback and Long Task API are a couple recent examples of these efforts.

Which technology innovation, be it software or hardware, will have the biggest impact on the speed of the web itself?

I think each of the efforts I mentioned earlier will have significant impact. That said, I’ll call out QUIC once more from a different angle: one of the often overlooked properties of QUIC is the fact that it’s a user-space protocol, which means that we can experiment and ship new versions independent of the OS network stack. With QUIC we’re no longer limited by the ~decade-long kernel update and upgrade cycle! This is a huge win, as it means that we increase our rate of learning and experimentation. We can learn how to make the web faster, QUICker…

If you’d like to read more of what Ilya has to say, we recommend reading his book: High Performance Browser Networking. It’s full of practical hands-on advice you can apply during the development process to build not only fast but also resilient apps.

Decide Better, Perform Better. New Features in CDN77 Client.

We built new features into the CDN77 Client to provide data that can help you in your decision-making process, for example when changing data center locations to improve overall performance.

In the Overview section, you can monitor the traffic for each data center and watch a corresponding line graph of the bandwidth.


Another new feature is in the Reports section, where you can now download graphs for Bandwidth, Traffic, Hits/Misses and HTTP Headers in JPEG format.


Furthermore, you can download data about CDN Resources and Datacenters in CSV format.


Our Control Panel allows full data center control, and you can easily activate or deactivate any PoP in our network to adapt to your growing traffic. All changes in settings take effect instantly. With these new features, you can support your decisions with hard data. Decide better, perform better.

Introducing Origin Basedir

We listen to our customers. We’re thus happy to introduce a new feature many of you have been asking for. Origin Basedir enables you to set a specific directory in your origin settings. This way, you can ensure the path to your files stays hidden. It can also help you bring order to your files.

Let me give you an example:

  • You have a file, say an image, and its URL is ‘example.com/path/to/directory/image.jpg’
  • With ‘example.com’ as your origin, your file is served from the URL: cdn.example.com/path/to/directory/image.jpg. This way, anyone can see the path to the directory of this file.
  • Origin Basedir enables you to set your origin to ‘example.com/path/to/directory/’ so your file’s new URL would be: cdn.example.com/image.jpg. Now, only you can see the path.
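The mapping behind this can be sketched in a few lines. This is an illustrative model only, not CDN77's actual edge implementation; the helper name is hypothetical.

```python
# Sketch of the URL mapping Origin Basedir performs: the CDN joins the
# configured origin (which may include a base directory) with the path
# the visitor requested. Illustrative only.

def origin_fetch_url(origin, request_path):
    """Build the origin URL a CDN edge would fetch for a given request path."""
    return "http://" + origin.rstrip("/") + "/" + request_path.lstrip("/")

# Without a basedir, the full directory path is visible in the CDN URL:
print(origin_fetch_url("example.com", "path/to/directory/image.jpg"))
# With a basedir, the CDN URL only needs 'image.jpg' to reach the same file:
print(origin_fetch_url("example.com/path/to/directory", "image.jpg"))
```

Both calls resolve to the same origin file, which is exactly why the shorter CDN URL hides the directory structure.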

Another way to use the Origin Basedir functionality is to create a specific resource for every type of file – images, stylesheets, scripts, videos, software or pretty much anything you serve through a CDN. Here’s what I mean:

  • Make a directory in your host with all files and name it according to the type of the given file, e.g. “videos”
  • Set the origin of a new resource to: example.com/videos/
  • Optionally: set a CNAME for this resource to “videos”
  • Your videos will then be served from the URL: videos.example.com/video.mp4
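Put together, the per-type setup amounts to a simple table of resources and a rule for composing the public URL. The structure and names below are a hypothetical illustration, not CDN77 configuration syntax.

```python
# Illustrative model of "one resource per file type": each resource pairs
# a CNAME with an origin whose basedir points at that type's directory.

resources = {
    # cname -> origin (with Origin Basedir set to the type's directory)
    "videos":  "example.com/videos/",
    "images":  "example.com/images/",
    "scripts": "example.com/scripts/",
}

def cdn_url(cname, filename, zone="example.com"):
    """Public URL a file is served from once the resource's CNAME is set."""
    return "{}.{}/{}".format(cname, zone, filename)

print(cdn_url("videos", "video.mp4"))  # videos.example.com/video.mp4
```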

Since we don’t limit our customers as to the number of CDN resources, you can create as many as you need.



We hope you’ll find this feature helpful. Make sure to let us know in the comments below. If you don’t have a CDN77 account yet, sign up now and try Origin Basedir yourself.

CDN77 partners up with KDE and Fedora

Following up on the partnership with Gentoo and CentOS, CDN77 today expands its list of sponsored projects with two more open source projects – Fedora and KDE.

Partnering with these projects is our way of giving back to the community, and we’re giving back what we do best. From this day on, you can enjoy faster loading times on both Fedora’s and KDE’s websites thanks to our CDN.

KDE is a free and open source software community focused on developing applications that are widely used on Unix-like systems as well as Microsoft Windows. For instance, Plasma Desktop is a desktop environment used today in many Linux distributions. Other applications created by KDE are very well known, mainly in the open source community: Calligra Suite, an office suite first released in 2010; Krita, a graphics editor; and Dolphin, a file manager.

Fedora recently released its newest version, Fedora 24 (which is only a step away from Fedora 42, which will answer the Big Question… right?), and it remains one of the most common Linux distributions for desktop computers. First established in 2003, Fedora’s main desktop environment remains GNOME, but you can pick KDE’s Plasma Desktop as well. The Fedora Project came into existence after Red Hat Linux was discontinued. Fedora focuses on staying on top of the game in terms of innovation; hence, the life cycle of each Fedora version is relatively short (approx. 13 months).

Welcome on board!

Introducing IP-Radar

Tired of constantly scanning subnets looking for a specific pingable IP? You can relax; we have something for you. At CDN77.com, we are proud to present our brand new online tool that helps you find pingable IPs for any subnet, instantly: the IP-Radar. We’ve been using it internally for quite some time and have now decided to share it with you.

Our mission is to save time. Our CDN service saves time for your end users; this tool saves time for your network guys. Oh, and one more thing: it’s completely free.

If you are a network specialist and you administer a network, chances are you’ve experienced a situation like this: a client reports some network problems, however, in order to discover the cause of the problem, you need to know the specific IP address of the reference point within the target network.

Every team probably has its own method for approaching this task, but normally you would have to go through subnets, look for IP addresses and then verify possible packet loss. We created a powerful tool to scratch our own itch, and it greatly facilitates the search for addresses.

It can now scratch your itch too.

IP-Radar is a unique database of pingable IP addresses on the internet. To find the desired IP address of any subnet worldwide, you simply enter an IP subnet or an Autonomous System Number. You will then be provided with the following information: a list of subnets, pingable IPs in a given subnet, the number of received packets for each subnet, and the round-trip delay time (RTT). This can save network admins a great deal of time each month. It has certainly worked for us.

IP-Radar Desktop | Mobile


Furthermore, you can use the IP-Radar data in your scripts via the API interface, which returns the results of a query in JSON. Instructions for the scripting API can be found on the IP-Radar site.
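As a sketch of what scripting against such a JSON response might look like: the response shape and field names below are hypothetical (check the IP-Radar site for the real schema), and the function is just an example of filtering results by RTT.

```python
import json

# Hypothetical example of post-processing an IP-Radar-style JSON result
# in a script: keep only pingable IPs whose RTT is under a threshold.
# The JSON shape and field names here are assumptions, not the real schema.

def best_pingable_ips(radar_json, max_rtt_ms=50):
    """Return pingable IPs whose round-trip time is under max_rtt_ms."""
    results = json.loads(radar_json)
    return [entry["ip"] for entry in results["pingable"]
            if entry["rtt_ms"] < max_rtt_ms]

sample = ('{"pingable": [{"ip": "198.51.100.7", "rtt_ms": 12},'
          ' {"ip": "198.51.100.9", "rtt_ms": 95}]}')
print(best_pingable_ips(sample))  # ['198.51.100.7']
```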

The tool is available to everyone, anytime, at https://radar.tools.cdn77.com, for free. For those of you dealing with issues on the road, there’s also a comprehensive mobile version.

We hope you’ll find this tool helpful. Make sure to let us know.

Improve your hit/miss ratio with Ignore Selective Query Strings

After Brotli, HTTP/2 and Let’s Encrypt SSL, we’re bringing you yet another feature to boost the performance of your CDN resources. This time we have focused mainly on improving your users’ experience, and we now introduce the new Ignore Selective Query Strings feature.

The name of the feature is quite self-explanatory – it enables you to ignore selected query strings in your URLs. This way, our servers will treat www.yoururl.com/?reflink=referral1 and www.yoururl.com/?reflink=referral2 as virtually the same. In situations like this referral example, the feature comes in quite handy. If you have multiple links leading to the same page that differ only in a parameter (such as affiliate links, Google Analytics links with UTM parameters, etc.), there is no point in having multiple copies of the same page cached on our servers. By adding “reflink” to the list of ignored query strings, visitors coming from referral links to your homepage will all be served the same cached page.

We have supported ignoring query strings for quite some time, but now you can choose to ignore only some of the query strings on your website. Generally speaking, we recommend turning this feature on. Having it turned off results not only in multiple copies of the same page but also in longer loading times for end users: every referral URL would first need to be cached before being served to other users. Plus, we must take into account that this cache will expire and will have to be renewed in the future.

There are cases when enabling the blanket Ignore Queries feature might not be the right thing to do. For instance, if your website has pagination parameters, a change in the pagination parameter in a URL usually leads to a completely different page with different content. This can now be resolved by selecting only those parameters which do not change the content of a given page. Implement the selective ignore feature and you’ll be left with fewer pages in the cache, a higher hit/miss ratio, and much happier users.
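Conceptually, what the feature does is normalize the cache key by dropping the ignored parameters. The sketch below is illustrative only (our edge servers implement this internally), and the ignored parameter names are just examples.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Illustrative sketch of cache-key normalization: URLs that differ only
# in ignored query parameters collapse to the same cache entry. The
# parameter names in the default set are examples, not a fixed list.

def cache_key(url, ignored=frozenset({"reflink", "utm_source", "utm_medium"})):
    """Drop ignored query parameters so equivalent URLs share one cache entry."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in ignored]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(kept), ""))

a = cache_key("http://www.yoururl.com/?reflink=referral1&page=2")
b = cache_key("http://www.yoururl.com/?reflink=referral2&page=2")
print(a == b)  # True: both normalize to http://www.yoururl.com/?page=2
```

Note that the pagination parameter `page` survives normalization, so different pages still get their own cache entries while the referral variants collapse into one.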

In the image below, you can see the Query strings settings, which you can find in the Other settings tab of any of your CDN resources.

For more information, check our knowledge base on how to enable query strings.

Ignore selected.

Why we don’t outsource our customer support

How many times have you encountered a “customer comes first” company responding in a not-very-helpful fashion? Whenever I deal with customer support, there are only two possible outcomes. In the ideal scenario, I’m satisfied and my problem is solved. In many cases, however, I receive a reply that is either automated or tells me to wait a few days, neither of which helps. Lately, I’ve even noticed that robots are taking over customer support. Well done losing touch with your clients.

We know customer support can be demanding and, sometimes, frustrating. However, that’s part and parcel of the business, and we love it. We’re here for our customers. That’s why we never even considered robotic or outsourced support. Robots and outsourced companies don’t seem to know the problems you may face. Our engineers do.

Now, we realise robotic support is still rare and most companies have humans. Nonetheless, we have encountered many companies that outsource their customer support, especially the larger ones. As if their customers’ concerns didn’t matter, they let someone else, often on the other side of the world, take care of them.

We don’t buy it.

I wonder how someone who doesn’t know a company’s product inside out can help me or you. I am sure many support companies do their best. Yet they are simply not capable of providing qualified support for the broad range of queries and requests that may come up.

At CDN77.com, we hire engineers to do support because we know they can solve issues. They know the tricks of their trade; it’s their job to know. They sit next to and work closely with the developers, and some of them even become developers over time. It’s mutually beneficial, as the developers can bear in mind what customers want or need during the development process. They stay in touch.

What I don’t understand is that it’s usually the big companies that outsource their support. Yes, they may be snowed under with piles of requests, but is that an excuse? Every company that takes its clients’ satisfaction seriously should scale up its customer support according to its business, regardless of size. So why on earth would you outsource?


Every company, including CDN77.com, experiences difficulties. During peak hours, our tech support guys receive a new email request every ~2 minutes. That doesn’t include our live chat, which is available 24/5 and where we strive to keep the average response time under 1 minute. Furthermore, in our industry, the vast majority of requests require more than just “I’m really sorry for this {first_name} but I’m here to help. Send me your grandma’s date of birth, your national insurance number and your mother’s maiden name and we’ll fix it ASAP.” or “We’re sorry for any inconvenience that may have occurred. We’ll take a look at it. Please allow two (2) business days for a response.” The very fact that customers reach out to live support implies that they expect things done as soon as possible. From a company’s perspective, it means that all such requests should be treated as ASAP. At least for us at CDN77.com, they are. When we say we’ll fix it, we do whatever it takes to fix it right away, not tomorrow, not in two days.

With CDN77.com, no Kryten, C-3PO or HAL 9000 is responding to your queries and requests. Every enquiry goes to our 99%-human in-house support guys. The remaining 1% relates to their occasional superhuman moments. Let’s meet them:


CDN77 Jakub – owing to his experience, he is rightfully considered to be the master Yoda of our support team. He lives by the wisdom “Do or do not, there is no try.”


CDN77 Marek tears every problem apart like a raging bull.


CDN77 Sabrina is able to manage multiple queries at the same time with a single goal: to solve them.


CDN77 Martin solves every problem dead calm like it’s a piece of cake.

CDN77 Raza, who is just finishing his Master’s in engineering, is too shy to take photos :) But when it comes to customer care, he often finds the most creative and helpful solutions.

CDN77 is a proud sponsor of CentOS and Gentoo

CDN77 CentOS Gentoo

CDN77 is constantly forming partnerships with open source and non-profit organisations. It is the least we can do to support the overall enthusiastic community that surrounds these kinds of projects.

This week we are happy to announce two more projects that will be powered by our CDN service, namely CentOS and Gentoo. Those names should ring a bell if you’re working in the IT industry. Using a CDN brings a significant improvement in the download speed for both the community developers and the end users.

CentOS stands for “Community Enterprise Operating System” and is built from the code of Red Hat Enterprise Linux. CentOS was ranked by Distrowatch.com as the 8th most popular Linux distribution of 2015. More importantly, rankings from the W3Techs website show that CentOS is currently used by 20% of the websites that run on Linux – in other words, it is the third most widely used Linux distribution.

“Gentoo is a free operating system based on either Linux or FreeBSD that can be automatically optimized and customized for just about any application or need,” says gentoo.org. W3Techs.com ranks Gentoo 5th in its table of the most popular Linux distributions.

Welcome on board!

IPoAC | Revolutionary CDN technology

We know that you rely on our CDN. That’s why we are so eager to have 100% uptime. We strive to distribute your content under any circumstances and we had long been thinking about new ways to deliver your assets. Then it hit us.

What we see as a major step forward is the technology described in RFC 1149: the IP protocol implemented over avian carriers (IPoAC). For the last few months, we have been beta testing and collecting enough pigeons to cover any traffic peaks. On the 1st of April, we rolled out support for all users.

Each postal pigeon carries a memory card with a complete copy of your website so once the connection is open, the average transfer speed is extremely high. This kind of connection, though, may have difficulties with the time to first byte. We take the pigeons’ latency issues very seriously and are currently beta testing hawks as a possible replacement on the physical layer.

In some areas, we encountered significant packet loss. After a thorough investigation, we learned that these locations correlate tightly with the natural habitat of the bald eagle, which preys on our packet-carrying pigeons. For that reason, the PoPs in North America will not support IPoAC at this moment. We believe that once we deploy the hawks into production, we’ll be able to roll out support in North America as well.

This delivery method is in line with our commitment to protecting the environment. While Facebook and Google try to extend the reach of the internet with drones and balloons, which are both expensive to manufacture and take their toll on the ecosystem, our pigeons procreate naturally, with great pleasure, and running them leaves absolutely zero carbon footprint.

As with Brotli and HTTP/2, we are happy to once again be the first CDN provider to roll out a new technology. And here is a sneak peek at this new technology package.

Our Crew:


Sgt. Pigeon Jacob

Lt. Pigeon Mordor


New recruits from bad neighborhood

