# What question do you like most from a prospective customer?

The question I like most is – “so how much money do you save me?” That’s usually a sign that we have shifted gears from pre-sales to negotiations. Even better, negotiations now start from the value we bring rather than the cost we incur.

Wholesale data capacity is quasi-commoditized (see Capacity Magazine), so market prices are known to all parties. It therefore seems simple to calculate the value of replacing physical capacity with virtual capacity. Consider an ISP with 3Gbps, paying $50/Mbps/month. By using DiViCloud, the ISP can add 1Gbps of virtual capacity over the existing 3Gbps link. Seems simple – we provide a value of 50 × 1,000 = $50,000/month, or $600,000/year.

It doesn't end there. To add 1Gbps of traffic the ISP would typically purchase an additional 1.2Gbps of capacity, to avoid congestion. DiViCloud, on the other hand, generates virtual capacity, so there is no need for spares; the traffic simply does not load the network. The alternative cost of 1.2Gbps of physical capacity is $60,000/month, or $720,000/year.

Moreover, DiViCloud's capacity is charged by consumption, based on the 95/5 model (see Dr. Peering). Physical capacity fees usually cover committed bandwidth, whether used or not. To compare apples to apples, the ISP should consider the alternative price of burst capacity rather than committed capacity. Naturally this figure is higher – for example $65/Mbps/month, rendering DiViCloud's value at $78,000/month, or $936,000/year.
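As a sanity check, the arithmetic above can be captured in a few lines (the prices and the 1.2× congestion headroom are the figures used in this post):

```python
def virtual_capacity_value(virtual_gbps, price_per_mbps_month, headroom=1.2):
    """Monthly and annual value of virtual capacity, priced against the
    physical capacity it replaces.

    headroom: physical capacity is typically over-provisioned (e.g. 1.2x)
    to avoid congestion; virtual capacity needs no such spare.
    """
    mbps = virtual_gbps * 1000 * headroom
    monthly = mbps * price_per_mbps_month
    return monthly, monthly * 12

# Committed price, no headroom: $50,000/month, $600,000/year
base_m, base_y = virtual_capacity_value(1, 50, headroom=1.0)

# Committed price with 1.2x headroom: $60,000/month, $720,000/year
head_m, head_y = virtual_capacity_value(1, 50)

# Burst price ($65/Mbps/month) with headroom: $78,000/month, $936,000/year
burst_m, burst_y = virtual_capacity_value(1, 65)
```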

95/5 pricing model; Dr. Peering (http://drpeering.net/core/ch2-Transit.html)
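For reference, the 95/5 (95th percentile) model cited above works roughly like this: usage is sampled in 5-minute intervals over the billing month, the top 5% of samples are discarded, and the highest remaining sample sets the billable rate. A minimal sketch:

```python
import math

def billable_mbps_95th(samples_mbps):
    """95th-percentile billing: discard the top 5% of 5-minute usage
    samples and bill at the highest remaining sample."""
    ordered = sorted(samples_mbps)
    # index of the 95th-percentile sample (top 5% of samples are free)
    idx = math.ceil(0.95 * len(ordered)) - 1
    return ordered[idx]

# 100 samples: a steady 300 Mbps with 5 bursts to 900 Mbps.
# The bursts fall in the discarded top 5%, so the billable rate is 300.
samples = [300] * 95 + [900] * 5
rate = billable_mbps_95th(samples)  # 300
```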

The typical question at this stage is – “Yes, but all you are doing is deferring costs and not replacing them. As demand increases we will eventually have to purchase additional physical capacity”. Well, if this were the case, then once the physical capacity is in place the ISP could turn off DiViCloud.

Physical capacity & virtual capacity mix

Virtual capacity is equivalent to adding an upstream pipe; it provides a perpetual benefit regardless of the underlying physical capacity. In fact, the ISP should always plan to take about 70% of its capacity from physical upstream providers, with the remaining 30% provided as virtual capacity at Half Price.
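Under that 70/30 mix, with the virtual share billed at Half Price, the blended cost per Mbps can be sketched as follows (the $50 physical price is illustrative):

```python
def blended_price_per_mbps(physical_price, physical_share=0.7):
    """Blended $/Mbps/month for a physical/virtual capacity mix,
    with the virtual share billed at half the physical price."""
    virtual_share = 1 - physical_share
    virtual_price = physical_price / 2  # the Half Price guarantee
    return physical_share * physical_price + virtual_share * virtual_price

# At $50/Mbps physical: 0.7 * 50 + 0.3 * 25 = $42.50 blended
blended = blended_price_per_mbps(50)
```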


# New DiViCloud PoP in Sydney, Australia

We’re happy to announce the DiViCloud network expansion with our newest DiViCloud PoP in Sydney, Australia. We now operate 12 PoPs worldwide.

The DiViCloud PoPs Map

DiViCloud PoPs are located close to content sources, rather than close to eyeballs. The opposite is true for a CDN, which is placed close to the content consumers. By selecting such locations, DiViCloud can apply its technology to almost all the traffic transferred to the ISPs and thus generate more virtual capacity.

How do we know where to place PoPs? We continuously analyze traffic sources and routes. As we see more traffic originating in a new location, we conduct an economic analysis for capturing this traffic. We need to determine whether the price gap can be bridged with this portion of the traffic.

Since we started serving virtual capacity to the Pacific, we have learned that more and more traffic originates in Australia. The main driver is CDNs, which have established and scaled up their Australian PoPs. Wholesale capacity economics in Australia and the Pacific is a challenging jigsaw puzzle – and we have solved it.

Using our new PoP we can now improve our service to ISP customers in Australia, New Zealand and the Pacific.

In my recent post I highlighted the differences in IP transit cost between locations. At the low end of transit costs are the locations where content is generated – the major cities in which the big content server farms sit (typically in the US and Western Europe). Baseline prices of $0.5–1 per Mbps per month reflect the actual cost of placing content on the web. From there, prices rise with transport costs, which grow with the distance from where content is created to where it is consumed. The further away the eyeballs are, the smaller the market, the fewer the transport alternatives, and the longer the transport chain – so the price of transport (obviously) increases.

Actually, this model is no different from shipping any other goods. An orange in California costs $0.5 per kg at wholesale prices. In Vancouver, wholesalers charge $3, since they have to pay the $0.5 plus an additional $2 transport fee, and they want to make a profit. The supermarket owner in remote Whitehorse, Yukon, pays $12 for the oranges: what starts off at $3 in Vancouver, plus a cascade of transporters all the way to Whitehorse, inevitably drives up the cost. No one is ripping anyone off in this process. But is there a way to provide affordable oranges to Whitehorse?

What if you could just teleport the oranges from California to Whitehorse? What if this teleportation could be achieved at a fraction of the transport cost, and without involving any middlemen? That's exactly what we do at DiViNetworks – for bits, not oranges. We are able to teleport 30–50% of the content from its source to any destination worldwide, without loading any transport, and over any combination of transport networks. No data is lost along the way. That's what we term VIRTUAL CAPACITY.
We share the price gap between our cost and the market IP price with our customers, guaranteeing that our customer ISPs pay HALF PRICE for the additional bandwidth.

Beam me up, Scotty, for a free 14-day DiViCloud trial. Follow our LinkedIn for more information and statistics on international bandwidth.

# Euro 2012 – TV is still king (but watch the throne)

In the aftermath of Euro 2012 (and no, I'm not trying to replace Prandelli's…) we learn one clear lesson – TV still dominates live video consumption. The figure below (source: RIPE's study) shows traffic at the DE-CIX Munich Internet Exchange during the Germany–Greece match (22 June), compared to traffic at the same time in previous weeks. As people get ready for the match – driving to friends, catching a nap, cooling the beers – Internet traffic declines. During the break they turn to check what others are saying on the net.

Traffic seen at DE-CIX Munich during the Germany v. Greece match on 22 June 2012

Yesterday's final was no different. Check out the traffic stats from TOP-IX, Torino's exchange point.

Traffic seen at TOP-IX Torino during the Spain v. Italy match on 1 July 2012

So TV is still holding the throne for planned live events. Yet we are keeping a close eye on two trends:

- Near-live traffic is booming. Missed the goal? Want to hear the Spanish Goooooal? Wish to poke your Italian friends? Go to the web.
- Many events are not freely accessible on TV. Some events are premium, whereas others are simply not broadcast everywhere. DiViNetworks serves many territories where people turn to the Internet to take part in such mainstream events.

One example is presented in the graph below, demonstrating traffic growth during a soccer match, as well as DiViLive's capability to flatten live traffic. The red marks the traffic actually passing on the link, and the green marks the virtual capacity generated by DiViLive (operating on live and near-live data). The traffic added due to the live event is shrunk to 10% of its original size.
Traffic during a soccer match, flattened with DiViLive

# Can Broadband Access Heal The World Economy? (To be discussed at G20)

In an open letter, the ITU (International Telecommunication Union) urges G20 leaders, meeting in Mexico next week, to define targets making broadband affordable in all countries. The ITU claims that broadband (BB) is the remedy to recession and recommends top-priority targets:

1. Universal BB policy – all countries should have a BB plan
2. Affordable BB – by regulation and/or market forces
3. Connecting homes to BB – 40% of homes in developing countries
4. Getting people online – 50% of the population in developing countries should be Internet-literate

Providing affordable BB in developing countries is not a simple task. Take a look at the table below, depicting two cases: serving a territory with a population of 500,000 in a developing vs. a developed country. Apparently, the transport cost required just to ensure reasonable ROI is highly sensitive to physical distance and link utilization, rendering transport to developing countries extremely expensive. Carriers are therefore reluctant to invest in such links, making international data transport a monopoly exactly where it is most needed. Regulation can only squeeze vendors' profit margins, and market forces are largely irrelevant in the developing world. Connecting developing countries to the world is therefore up to pseudo-philanthropy (à la World Bank), or to technological solutions that change the table above. And guess what – such are DiViCloud and DiViLive.

# What's Common to Helena, Montana and Cochabamba, Bolivia? (hint: data capacity cost)

We've often been asked whether virtual capacity is relevant only for developing countries, or whether data-optimization services are required in the USA and Europe too. So we hit the road, met a bunch of ISPs in rural USA, participated in a WISPA event, and started working with distribution channels.
The traffic mix in rural USA is not significantly different from other places, and thus DiViNetworks' guaranteed 30–50% capacity expansion can be reached. Calix did a great job and analyzed 45 rural ISPs (here).

Traffic mix in rural USA ISPs

You can also learn that even a small ISP with 1,000 subscribers will need about 500Mbps of Internet capacity (36.7GB per subscriber per month, assuming 6 effective hours per day). In most cases only one carrier is laying fiber to rural towns (a.k.a. the middle mile), spending $25–60K per mile and expecting reasonable ROI. Wholesale prices range between $20/Mbps/month and $200/Mbps/month – and that's without counting the backhaul often required. In that sense, Helena, Montana is no different from Cochabamba, Bolivia.

Simple calculation shows that even a small ISP will have to spend $20K per month (500Mbps × $40/Mbps), making Internet connectivity a huge obstacle to profitability.
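The 500Mbps figure above follows from spreading each subscriber's monthly volume over the effective busy hours; a quick sanity check:

```python
def required_capacity_mbps(subscribers, gb_per_sub_month=36.7,
                           effective_hours_per_day=6, days=30):
    """Rough peak-capacity estimate: spread each subscriber's monthly
    data volume over the effective busy hours of the month."""
    bits_per_month = gb_per_sub_month * 1e9 * 8 * subscribers
    effective_seconds = effective_hours_per_day * 3600 * days
    return bits_per_month / effective_seconds / 1e6

# 1,000 subscribers -> roughly 450 Mbps, i.e. about 500 Mbps as stated
capacity = required_capacity_mbps(1000)
```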

The thousands of rural ISPs, and tens of thousands of rural campuses, for which DiViCloud can virtually expand capacity by 30–50%, represent an interesting opportunity. With our US PoPs at major Internet junctions, this will soon become a reality.

# Infographic – To be or not to be Unique?

We did a little bit of research to find out whether our web surfing habits are really as unique as we would all like to believe, or whether we are all viewing the same content.

Click on the infographic (you may want to zoom in), and you'll find that 50% of the data traffic has already passed through the network – even within a short window of 6 hours. Isn't that a waste of expensive bandwidth?

Yair's post "We are all individuals" explained that with as little as 150GB of storage at the network edges, most of this redundant data can be saved without any deterioration in service.

# Cache strategies are diverging

In the last year or so the Internet caching market has returned from its hibernation decade, straight onto the radar screens of leading telecom equipment manufacturers – fixed and mobile alike (for example, Juniper and Cisco).

During this long cache winter independent software vendors have invested in developing solutions which may share the same name, yet address different needs – capacity savings vs. quality of experience (QoE) improvement, managed content vs. over-the-top (OTT), content specific vs. application specific vs. everything-HTTP vs. protocol agnostic, saving peering vs. saving transport, located at the network core vs. located at network edge.

This spaghetti of solutions has triggered operators and ISPs to redefine their cache strategies. Whereas previously an ISP would simply deploy a single file-caching system at the entry to its network, nowadays legacy file caching is no longer effective for most Internet content (which is no longer files, but rather fragments of streams – example here), and CDNs replace much of file caching's traditional role. The new environment requires new solutions.

Networking gear providers are taking a close look at this market, rehearsing before they play an active role. Speaking to such players I have witnessed significantly different points of view, perhaps emerging from the different positions such players come from.

Yet they all share the same concern – cache servers, as the name implies, serve content. They act as a "mini Internet in a box." Placing such an entity in the network (a) puts constraints on routing and traffic engineering and (b) requires an endless chase after the evolving Internet, its protocols, formats and business logic. The independence of the network and the IT may be jeopardized by introducing application manipulators into the network.

Whereas such an approach can be contained for managed content, it raises many concerns with regard to OTT. Content not managed by the operator should arrive from the content providers directly, or through their representatives – namely CDNs. If you worry about peering or transport costs, expand your bandwidth using virtual capacity – a content-agnostic solution equivalent to adding bandwidth.

From the core southbound – let the network be a network. Make sure your network is as efficient as possible by expanding it virtually for all bits, without application, protocol or content discrimination. This way you get the best of all worlds – an optimal network, full design flexibility and zero application-level configuration.

# How much data is needed to describe a message?

An old joke tells of a man sending a telegram to his brother inviting him to his son’s wedding. Initially he writes this message:

Dear brother,
You are invited to my son’s wedding, two weeks from now.
Looking forward to meeting you.

After learning the price of each word he reasons that Dear brother is redundant since his brother is going to be getting the message by hand, and he already knows how much he loves him. The words you are invited are also redundant, since it is obvious that once there is a wedding his brother is invited, similarly Looking forward to meeting you can be deleted. The words my son’s are also redundant because it wouldn’t make sense to report another man’s wedding, and so the man ended up with the shorter telegram:

wedding in two weeks
which exactly captures the information he wanted to transfer to his brother.

Given a message, in information theory we try to assess how many bits are needed to encode it – or, in other words, given an encoded message, how many bits are redundant and can be deleted. To do that we first need to quantify the amount of information encoded in a message. In information theory this is referred to as entropy. The higher the entropy, the more information is encapsulated in those bits (and therefore the fewer bits are redundant). Given a message $M = (m_1, m_2, ..., m_n)$ of $n$ symbols over an alphabet $\Sigma = \{\sigma_1, \sigma_2, ..., \sigma_s\}$ of $s$ letters, the entropy is given by this formula:

$Entropy(M) = - \sum_{i=1}^s Pr(\sigma_i)\log Pr(\sigma_i)$,

where $Pr(\sigma_i)$ is the probability of an arbitrary letter in $M$ being $\sigma_i$. In the simple case, $Pr(\sigma_i)=\frac{number\;of\;times\;\sigma_i\;appears\;in\;M}{n}$.

As can easily be seen, the higher the entropy, the more random the message seems (i.e. there are fewer patterns in it); the message aaaaaaaaaa….aa, on the other hand, has entropy 0 (remember that $\lim_{x \rightarrow 0} x\log x = 0$). For compression, we want to re-encode a message with fewer bits, so that its entropy per bit increases. In encryption (the other end of the spectrum), we want to re-encode a message with the same number of bits, so that its entropy increases and the message appears random.
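The entropy formula above is straightforward to compute; a minimal sketch using empirical letter frequencies:

```python
from collections import Counter
from math import log2

def entropy(message):
    """Shannon entropy in bits per symbol, using the empirical
    frequency of each letter as Pr(sigma_i)."""
    n = len(message)
    return -sum((c / n) * log2(c / n) for c in Counter(message).values())

# A fully repetitive message carries no information per symbol:
# entropy("aaaaaaaaaa") == 0.0
# Two equiprobable symbols need exactly 1 bit per symbol:
# entropy("abababab") == 1.0
```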

The joke we opened with goes on, as the man decides to delete the word wedding (because what other reason is there to be sending a telegram in the first place), and the words in two weeks (which is the appropriate time prior a wedding to be sending invitations), and so he returns home without sending any telegram at all.

# We are all individuals

The memorable scene in Monty Python's Life of Brian (1979) perfectly reflects the Internet reality. Two millennia have passed, and we are a herd of 7 billion individuals. We consume the same content – perhaps from different sources, using different methods and devices, at different times and locations – but we all do the same.

Self-similarity in Internet traffic is amazingly high. It is so high that within 10 minutes, about 48% of a sizable ISP's traffic is not new anymore; it is merely a combination of previously transferred content. An even more ego-oppressing finding suggests not only that we are very similar to many other people, but that out of the whole huge Internet we are practically wandering in a domain no larger than 150 Gigabytes at any given time.

Does this mean that you can just cache 150GB and reduce 50% of the Internet load? Actually, no. These 150GB are changing all the time; they emerge as fragments of diverse sessions, from different sources and in different protocols. File caching would simply not work. It takes a real-time, bit-level system to leverage our herd behavior.
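To illustrate the difference from file caching (and only to illustrate – this toy is not DiViNetworks' actual mechanism), consider deduplicating a byte stream by chunk hashes: repeated chunks are replaced by short references, regardless of which file, session or protocol they belong to:

```python
import hashlib

def dedup_stream(stream, chunk_size=64, seen=None):
    """Toy redundancy eliminator: split a byte stream into fixed-size
    chunks; a chunk already seen is replaced by a short hash reference.
    (Real systems use content-defined, bit-level boundaries rather than
    fixed chunk or file boundaries.)"""
    seen = set() if seen is None else seen
    out = []
    for i in range(0, len(stream), chunk_size):
        chunk = stream[i:i + chunk_size]
        digest = hashlib.sha256(chunk).digest()[:8]
        if digest in seen:
            out.append(("ref", digest))   # 8-byte reference instead of 64 bytes
        else:
            seen.add(digest)
            out.append(("raw", chunk))
    return out

# A stream of ten identical 64-byte blocks mostly dedups away:
# 1 raw chunk is transferred, the other 9 become references.
encoded = dedup_stream(b"x" * 64 * 10)
```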

Follow our future posts, in which we will share detailed and graphical findings.