
Herding CATS


Not so long ago, much of the Internet worked like this: your computer made a request of a remote computer, the remote computer responded, and your computer displayed the results. That is an oversimplification of the end-to-end model of communication on the Internet, but many people remain under the illusion that Internet communication is still a direct conversation between a local or mobile device and a remote server.

It's been decades since the Internet worked like that.

We Only Take Cache

Large organizations with popular content would be overwhelmed if they had to provision enough network bandwidth and server power to support that venerable end-to-end model. Early on, in the Internet's first period of stunning growth, network engineers realized that if they could put copies of popular content closer to the user, both the user and the provider of the content would benefit. The consumer of the service would get better responsiveness, and the content provider would get reduced network costs and, potentially, improved network performance and reliability.

The idea of having copies of content for future use has a long history. Your browser keeps copies of web pages it has recently retrieved so that it doesn't have to retrieve them again if you want to go back and look at them. All sorts of Internet applications keep copies of passwords, configurations and other information to improve the user's experience. These copies are called caches, and we say that the browser caches content locally to provide a better browsing experience.
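
To make the idea concrete, here is a minimal sketch of a local cache in Python. It is purely illustrative (the URL and the time-to-live are assumptions, not anything a particular browser or CDN actually uses), but it shows the basic trade: keep a copy for a while, and skip the network round trip whenever the copy is still fresh.

```python
import time
import urllib.request

class SimpleCache:
    """Keep local copies of fetched content for a limited time (TTL)."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.store = {}  # url -> (timestamp, content)

    def fetch(self, url):
        entry = self.store.get(url)
        if entry is not None:
            fetched_at, content = entry
            if time.time() - fetched_at < self.ttl:
                return content  # cache hit: no network round trip needed
        # Cache miss (or expired copy): go back to the origin server.
        with urllib.request.urlopen(url) as response:
            content = response.read()
        self.store[url] = (time.time(), content)
        return content

cache = SimpleCache(ttl_seconds=60)
page = cache.fetch("https://example.com/")        # first call hits the network
page_again = cache.fetch("https://example.com/")  # second call is served locally
```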

An entire industry has grown up around provisioning and operating caches - or content delivery networks (CDNs). CDNs are an essential part of the Internet's infrastructure.

Why Stop There?

The delivery of content on the Internet is sophisticated and complex. What looks like a single web page may be provided by multiple web servers, and the same web page might be delivered by different web servers in different parts of the world. CDNs specialize in, and optimize, the delivery of content. However, content is just one part of the Internet.

Over two decades ago (can that be?), Amazon created a subsidiary called Amazon Web Services (AWS). AWS allowed developers not just to share content from servers all over the world, but also to build applications that could be shared. The idea was that if you developed an application in one country, you could deploy it anywhere in the world where AWS provided services. In essence, an Internet delivery service for applications.

The "virtualization" of an application means that it could be written once and then deployed wherever needed. Applications became much more responsive to users, and the network reliability and flexibility paid dividends to the application developers.

Life on the Edge

Taking this a step further, what if you could distribute both storage and computing power as close as possible to the place where they are needed - and only when needed? Rather than paying for computing power you might not use or storage you might not need, a new approach would bring the computing and storage, as needed, as close to the user of the services as possible. This idea is called edge computing.

Edge computing is an architectural adaptation of networks that provides timely access to virtualized services. This adaptation makes the services more scalable (they can support larger numbers of users and services), more reliable (if a service encounters a problem, another instance of that service can step in), faster (because the services are close to the user and not affected by topological distance in the network) and more efficient (analytical and AI tools can increase operational efficiencies).

Edge computing is already an essential part of mobile networks. It also supports the dramatic increase in devices connected to the Internet of Things (see the Wikipedia article on edge computing for references). One consulting group has noted that while only 10 per cent of enterprise-generated data is created and processed outside a traditional data centre today, by 2025 that number will be 75 per cent. The rise of the remote worker is only part of that story.

Power Steering

Would it be possible to "steer" Internet traffic to the edge resources that have the highest level of availability, or the least expensive resources, or even the greatest amount of available computing power? And, if possible, could there be standards in place to help make "traffic steering" interoperable between networks?

Both the IETF and the ITU-T have standardization work in just this area. In Study Group 13 at the ITU-T, significant work is in progress on standardizing the coordination of networking and computing.

Traffic steering has a long history in the IETF, but a recent development bears watching. In 2023, a new Working Group called Computing-Aware Traffic Steering (CATS, for short) was created. CATS looks at the problem of how the network edge can steer traffic from clients to sites offering a needed service, where the "needed service" may include content, computing power or storage. The steering is based on measures of the network and the service sites, such as bandwidth, latency, capacity, availability and capabilities.
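
As an illustration only, here is a small Python sketch of the kind of decision a CATS-style edge might make. The metric names, weights and candidate sites are all invented for this example; the actual CATS metrics and architecture are still being worked out by the Working Group.

```python
from dataclasses import dataclass

@dataclass
class ServiceInstance:
    """A candidate site offering the needed service, with advertised metrics."""
    name: str
    latency_ms: float      # network distance from the client
    free_capacity: float   # fraction of compute/storage still available (0..1)
    availability: float    # probability the instance is up (0..1)

def score(instance: ServiceInstance) -> float:
    # Illustrative weighting only: prefer low latency, high spare capacity
    # and high availability. Real deployments would use richer, standardized metrics.
    return (-0.5 * instance.latency_ms
            + 40.0 * instance.free_capacity
            + 60.0 * instance.availability)

def steer(candidates: list[ServiceInstance]) -> ServiceInstance:
    """Pick the candidate with the best combined score."""
    return max(candidates, key=score)

candidates = [
    ServiceInstance("edge-paris", latency_ms=12, free_capacity=0.2, availability=0.999),
    ServiceInstance("edge-frankfurt", latency_ms=25, free_capacity=0.8, availability=0.995),
    ServiceInstance("central-cloud", latency_ms=80, free_capacity=0.9, availability=0.999),
]
print(steer(candidates).name)
```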

CATS is a remarkable IETF working group in that, although 23 different Internet Drafts have been submitted for its consideration, only one has been adopted for official work by the Working Group. That single document combines a problem statement, a set of use cases and the requirements for the work.

In the IETF, CATS is different from other "traffic steering" working groups (such as the now-concluded ALTO) in that CATS is responsible for developing an architecture. ALTO, by contrast, developed an HTTP-based protocol that allows a host to ask a server with more extensive knowledge of the network for the optimal paths to resources.
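
For a flavour of the ALTO approach, here is a hedged Python sketch of a client asking an ALTO-style server for its cost map and picking the cheapest destination. The server URL and PID names are hypothetical, and the exchange is simplified from what RFC 7285 actually specifies (a real client would first discover resources through the server's Information Resource Directory).

```python
import json
import urllib.request

# Hypothetical ALTO server URL; not a real deployment.
COST_MAP_URL = "https://alto.example.net/costmap/routingcost"

def cheapest_destination(source_pid: str) -> str:
    """Fetch the server's cost map and return the lowest-cost destination PID."""
    request = urllib.request.Request(
        COST_MAP_URL,
        headers={"Accept": "application/alto-costmap+json"},
    )
    with urllib.request.urlopen(request) as response:
        cost_map = json.load(response)["cost-map"]
    costs = cost_map[source_pid]        # e.g. {"pid-eu": 1, "pid-us": 10}
    return min(costs, key=costs.get)

# print(cheapest_destination("pid-client"))  # requires a reachable ALTO server
```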

Why Does It Matter?

CATS is important because it reflects an IETF response to a growing set of requirements: how to steer traffic to the best resources available. The IETF is a natural place to do the work, but we've seen how the ITU-T (especially participants from the mobile network world) would like to fashion standards for similar use cases. It's also essential that the architectural approaches to traffic steering in traditional Internet settings and in mobile networks do not drift too far apart.

CATS intends to adopt a Framework and Architecture document this summer - possibly at its Vancouver meeting. The ITU-T's Study Group 13 meets in March this year and will be considering its own "traffic steering" solutions. How different they are will be the topic of a posting later this year.

