Blink of an Eye

It was only a few years ago that a 2-second response time was considered great. The highly competitive Internet world (and possibly high-frequency trading on Wall Street) has changed all that. The blink of an eye normally takes 300–400 milliseconds, and human beings can easily perceive differences of a couple hundred milliseconds. Google says users will visit a site less often if it is just 250 ms (1/4 second) slower than a competitor's.

So the new end-user response time goal to shoot for is 250 ms in the fast-changing and dynamic Internet world of e-commerce and advertising. I do not believe that the corporate world has moved in the same direction for its internal applications: a 2-second response time may still be good enough there; otherwise the APM industry would suddenly explode in revenues!

What is interesting is that this tall order in user expectations carries over to the smartphone and mobile network world as well. To understand where time is spent in an application transaction, we have to examine the physical path of a transaction:

1. The PC, laptop, tablet, or smartphone (say, in New York)
2. Local area network, Wi-Fi, or mobile network
3. ISP network or a private corporate network
4. Main Server complex (say in San Francisco)
5. Additional servers (such as Ad servers for a web page)
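As a back-of-the-envelope exercise, the segments above can be summed into a single roundtrip budget. This sketch uses illustrative per-segment numbers (assumptions drawn from the discussion that follows, not measurements):

```python
# Illustrative latency budget over the five-segment transaction path.
# All figures are assumptions for illustration, not measurements.
SEGMENT_RTT_MS = {
    "client_device": 0,        # item 1: assume a fast client, per the text
    "access_network_lte": 45,  # item 2: LTE roundtrip (40-50 ms range)
    "wan_ny_to_sf": 42,        # item 3: NY-SF backbone roundtrip
    "server_complex": 10,      # item 4: assumed server processing time
    "ad_servers": 30,          # item 5: assumed third-party ad fetches
}

def total_rtt_ms(segments: dict) -> int:
    """Sum the per-segment contributions for one roundtrip."""
    return sum(segments.values())

if __name__ == "__main__":
    print(f"One roundtrip: {total_rtt_ms(SEGMENT_RTT_MS)} ms")
```

Even with generous assumptions, a single roundtrip consumes roughly half of the 250 ms budget before any application work is done.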

Let us assume that we have the latest and the greatest client device and ignore the delay from item #1.

Going to item #2: in a lightly loaded wired LAN environment, roundtrip latencies are consistently of the order of a few milliseconds. In an uncongested Wi-Fi network, depending on the frequency band (2.4 GHz or 5 GHz), roundtrip latencies are in the range of 1–10 ms.

Roundtrip latencies for mobile networks depend heavily on the generation and the underlying technology, so it matters which network you home on (2G, 2.5G, 3G, LTE, etc.). AT&T cites roundtrip latencies of 40–50 ms for LTE. For the older HSPA and HSPA+ technologies the ranges are 100–200 ms and 150–400 ms respectively, and for the much older EDGE and GPRS networks it is 600–750 ms.

Item #3 depends on the relative locations of the client device and the server complex; in this example (New York to San Francisco), the roundtrip latency would be 42 ms. If the server complex is in London, Bombay, or Sydney, the roundtrip latencies are higher: 56 ms, 126 ms, and 160 ms respectively.
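These figures line up closely with the speed of light in optical fiber (roughly two-thirds of c, about 200,000 km/s) over the great-circle distance to each city. A quick sanity check, with approximate great-circle distances from New York as assumptions:

```python
# Rough physics check on the cross-country and intercontinental roundtrips.
# Light in optical fiber travels at about 2/3 of c, i.e. ~200,000 km/s,
# which is 200 km per millisecond. Distances are approximate assumptions.
FIBER_SPEED_KM_PER_MS = 200.0

def min_rtt_ms(distance_km: float) -> float:
    """Theoretical minimum roundtrip over fiber, ignoring routing and queuing."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

if __name__ == "__main__":
    for city, km in [("San Francisco", 4130), ("London", 5570),
                     ("Bombay", 12540), ("Sydney", 16000)]:
        print(f"New York -> {city}: >= {min_rtt_ms(km):.0f} ms roundtrip")
```

In other words, these roundtrip numbers are essentially at the physical floor; no amount of network tuning can push them much lower.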

What all this means is that, from the latest smartphone, a single roundtrip from NY to SF over an LTE network costs about 100 ms. Most applications first have to set up a TCP/IP connection before requesting any data from the server, and the connection handshake takes a roundtrip of its own. This implies that a single request-response interaction spends at least 200 ms. So even for this ideal application, where all other performance factors are perfect, we are already approaching the desirable limit of 250 ms.

There is hardly any application with a single request-response pair. The number of request-response pairs, what we call application chattiness, can run into the tens or even hundreds. If a smartphone app has 10 roundtrips, which is not unusual, we are already looking at a one-second response time!
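The cost of chattiness compounds linearly, assuming each request waits for the previous response (no pipelining or parallelism):

```python
def chatty_response_ms(rtt_ms: float, roundtrips: int) -> float:
    """Best-case response time for a serial sequence of request-response
    pairs; server processing and bandwidth are ignored entirely."""
    return roundtrips * rtt_ms

if __name__ == "__main__":
    print(chatty_response_ms(100, 10))  # 1000.0 ms -- one second
    print(chatty_response_ms(100, 50))  # 5000.0 ms -- five seconds
```

At 50 roundtrips, a chatty application is five seconds slow before a single byte of server or database time is counted.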

In this article we examined how the bar on end-user response time has risen to the blink of an eye, and how achieving this tall order is a challenge even in an ideal situation. We focused on latency because it is the limiting factor; we have not even considered server, database, bandwidth, and application coding aspects, which could be even more adverse. But there are innovative APM best practices and techniques to address all these issues and approach this challenging goal (a topic for many future articles).