Setting up Linksys E1200 as a wireless bridge

I get my internet connectivity from MTNL. They’ve given me a fairly rudimentary wireless ADSL modem/router – the Beetel 450TC1. The range of the MTNL router doesn’t quite cover the entire house, hence the need for a repeater/bridge in another location. Since it’s not possible for me to run a cable between the two routers, it has to be a wireless bridge/repeater.

The MTNL-supplied Beetel 450TC1 is my primary router (let’s call it R1), and I needed a secondary router (let’s call it R2) to work as a wireless bridge/repeater.

So today I nervously bought a Linksys E1200 to serve as R2 in my setup. I planned to put DD-WRT on it and set it up as a bridge/repeater.

The E1200 doesn’t have bridging capabilities right out of the box. You’ll need a firmware such as DD-WRT to unleash the full potential of the router.

Flashing DD-WRT on the E1200 – Follow the steps mentioned here to the T and your router should be upgraded and ready for the wireless configuration. I haven’t bothered with the very latest firmware, as the vanilla build mentioned in the steps works flawlessly for me.

Setting up the E1200 for wireless bridging/repeating – Follow these steps to the T and you should be good –

The E1200 is rated by FlashRouters as one of the best routers to set up as a dedicated repeater. At Rs. 2999 at Croma retail, it’s not a bad option at all.



NodeJS vs. Tornado benchmarking

I ran an Apache Benchmark test on similar NodeJS and Tornado Webserver instances. Here are the results –

ab -n 10000 -c 1000 http://<server>:8888/

Tornado (disabled console logging via --logging=none)
Server Software:        TornadoServer/2.4.1
Server Hostname:
Server Port:            8888

Document Path:          /
Document Length:        12 bytes

Concurrency Level:      1000
Time taken for tests:   145.729 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      1700000 bytes
HTML transferred:       120000 bytes
Requests per second:    68.62 [#/sec] (mean)
Time per request:       14572.901 [ms] (mean)
Time per request:       14.573 [ms] (mean, across all concurrent requests)
Transfer rate:          11.39 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   14 203.1      1    3012
Processing:    88 14113 3198.7  15748   18858
Waiting:       15 8399 4052.4   9394   15839
Total:         88 14128 3200.1  15749   18859

Percentage of the requests served within a certain time (ms)
  50%  15749
  66%  15780
  75%  15801
  80%  15813
  90%  15839
  95%  18794
  98%  18843
  99%  18854
 100%  18859 (longest request)


NodeJS
Server Software:
Server Hostname:
Server Port:            8888

Document Path:          /
Document Length:        12 bytes

Concurrency Level:      1000
Time taken for tests:   124.844 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      1130000 bytes
HTML transferred:       120000 bytes
Requests per second:    80.10 [#/sec] (mean)
Time per request:       12484.435 [ms] (mean)
Time per request:       12.484 [ms] (mean, across all concurrent requests)
Transfer rate:          8.84 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   12 187.1      1    3009
Processing:   183 11937 3366.1  12766   18837
Waiting:       21 6747 3388.1   6419   12817
Total:        184 11949 3367.8  12766   18839

Percentage of the requests served within a certain time (ms)
  50%  12766
  66%  12792
  75%  12815
  80%  15760
  90%  15812
  95%  15826
  98%  15840
  99%  15842
 100%  18839 (longest request)


Observations –

  1. NodeJS sends smaller response headers than Tornado for the same response body – compare “Total transferred” (1,130,000 bytes vs 1,700,000 bytes) against an identical 120,000 bytes of HTML
  2. NodeJS finished the benchmark sooner, at 124.84 seconds, compared to Tornado at 145.73 seconds
  3. NodeJS delivers better throughput, at 80 reqs/sec, compared to Tornado at 68 reqs/sec
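The actual handlers behind these numbers aren’t shown above. As a stand-in, here is a minimal Python sketch of the kind of benchmark target involved – the 12-byte “Hello World!” body is my assumption (it matches ab’s reported “Document Length: 12 bytes”), and it uses the stdlib rather than Tornado or NodeJS, so it only illustrates the shape of the target, not either framework:

```python
# A minimal stand-in for the benchmarked handler. The 12-byte
# "Hello World!" body is an assumption matching ab's reported
# "Document Length: 12 bytes"; the real Tornado/NodeJS code isn't shown.
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

BODY = b"Hello World!"  # exactly 12 bytes

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(BODY)))
        self.end_headers()
        self.wfile.write(BODY)

    def log_message(self, fmt, *args):
        # Suppress per-request logging, as the post did for Tornado
        # via --logging=none, so logging doesn't skew the numbers.
        pass

def make_server(port: int = 8888) -> ThreadingHTTPServer:
    """Build the server; call .serve_forever() to run it, then
    benchmark with: ab -n 10000 -c 1000 http://127.0.0.1:8888/"""
    return ThreadingHTTPServer(("", port), HelloHandler)
```

Suppressing per-request logging matters here: at 10,000 requests, console I/O alone can noticeably distort throughput figures.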

Run CentOS on your Windows laptop

Here are the steps to run a virtual instance of Linux CentOS on your Windows laptop using Oracle VirtualBox –


Downloading the essentials

  1. Get Oracle VirtualBox from
  2. Get CentOS 6.3 ~ 64bit from AOL India servers -> (assuming you’re in India, else get your CentOS from another mirror). I downloaded the CentOS minimal ISO, which is a little over 300 MB.

VM setup

  1. Go ahead and install Oracle VirtualBox. This should be an easy step.
  2. Now it’s time to set up your Linux instance.
  3. Click “New” and go ahead and give the new instance a name.
  4. Select Type:”Linux” and Version:”2.6″ (64 bit). Hit Next.
  5. Change recommended memory size to 512 MB.
  6. Select “Create a Virtual Hard Drive now” > Next > Keep VDI selected. Hit Next.
  7. Select “Fixed size” > 2.0 GB. Hit Next.
  8. The wizard should now close.
  9. The instance you just created should show up on the Left hand bar in the “powered off” state.
  10. Now right click on the instance you just created. Hit Start.
  11. When prompted to provide a start disk, select the CentOS 6.3 ISO that you had previously downloaded. Hit Start.
  12. Now follow the instructions to install Linux just like you would on a bare metal box.
  13. If you plan to run web-server or an app-server on this CentOS VM then you’ll have to change the networking mode for this VM from NAT to Bridged. This ensures that your VM will get an IP address from the same DHCP source as your Windows Laptop.

Making CentOS ready

  1. Now log into your CentOS instance via the console
  2. Once you’re logged in as “root”, run -> ifup eth0. This will bring up your ethernet interface.
  3. Now your instance will have a “real” IP address. To check, run -> ifconfig
  4. Now open up “/etc/sysconfig/network-scripts/ifcfg-eth0” and change ONBOOT to “yes“. This will ensure that you don’t have to perform step #2 above whenever you bring up your VM instance.
  5. From now on you can SSH into your instance (via putty, if you prefer) via the IP address of the machine – as found out in step #3.
  6. Now let’s change an iptables rule to allow HTTP traffic to the VM instance.
  7. Open up the /etc/sysconfig/iptables file and add the following rule
    • -A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
    • The above should be added just below this line in the file: “-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT”
  8. You’re all set. Now go ahead and run -> init 6 for a VM reboot.
  9. Now go ahead and install Apache or any other web server, and it will be ready to serve on port 80 since we opened up the port in step #7.
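Once the VM is back up, you can sanity-check the firewall change from your Windows host. A small Python sketch – the helper name and the example IP are mine, not from the steps above:

```python
# Check from the host that the VM now accepts TCP connections on
# port 80 after the iptables change in step #7. The function name
# and the sample IP below are illustrative.
import socket

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Use the VM's IP from step #3, e.g.:
# is_port_open("192.168.1.42", 80)   # hypothetical address
```

If this returns False while SSH (port 22) works, the iptables rule or the web server itself is the likely culprit, not the network.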


Hope the above helps you take the first few steps towards understanding virtualization and running your own virtual Linux instance under Windows.

How to reduce your VOD bandwidth bill by up to 30%

If you’re a large media organization that publishes a lot of Video on Demand (VOD) content, you may want to read the post below on how to save more than 30% on VOD bandwidth bills without compromising the quality of your videos or degrading the user experience.

Objective – Save bandwidth costs on VOD without changing the video itself


Current mechanism – 

This is how all existing flash-based video players play out VOD content – when the user starts playing a video, the player starts downloading the entire video in the background. This happens irrespective of whether the user is going to watch the entire video or not. With today’s high-speed internet connectivity, the time taken to download the video is usually much shorter than the playout time of the video.

For example, say a user is watching a 2-minute video clip, which could easily be 20 MB or more depending on the encoding quality and FPS. As the video starts playing, the player starts downloading the rest of the 20 MB video in the background. What if the user moves on to another page or website after watching only half of the video?

There are two problems with the current approach –

  1. Wasted bandwidth – As you can guess, in the above example if the user only watched half of the video then the bandwidth is wasted downloading the entire clip. This is wasted bandwidth both for the content provider as well as the user. They’re both being charged for the bytes transferred.
  2. Blocked server connection – as the video player is downloading the rest of the video in the background, it results in a blocked server-side connection until the video is fully downloaded.


Proposed mechanism – 

Summary – Change the flash player to fetch the video in segments instead of downloading it all in one go.

Say when the user starts playing the 2-minute video, only the next 30 seconds’ worth of video is pre-fetched. As the user approaches the 15-second viewing mark, the next 15 seconds’ worth of video is downloaded. A new 15-second segment is fetched every time the user finishes watching the current 15-second segment. Every time the video player needs to fetch video, it establishes a new connection and releases it once the segment is downloaded.

In the above mentioned approach, the video is downloaded in chunks depending on how much the user has watched and not all at once.

There are two advantages with the proposed approach –

  1. Reduction of wasted bandwidth – Say if the user watches only 1 minute of the video, the downloaded part of the video would be for approximately 1 min, 15 seconds – 12 MB. A saving of 8 MB from the previous scenario. That’s a 40% saving in bandwidth consumed both for the user as well as the video hosting provider.
  2. Better scalability –  every time the video player needs to fetch a segment, it makes a new connection to the server which is released as soon as the segment is downloaded. This mechanism ensures that a connection to the server is not blocked until the entire video is downloaded. This is akin to a connection pool mechanism which frees up the server to serve other connections.
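The segment scheduling described above can be sketched as a pure function of playback position. This is my own illustrative reading of the scheme (a 30-second initial prefetch, then 15-second segments fetched as the buffer ahead of the playhead runs low), not the actual flash player code:

```python
# Illustrative sketch of the segment scheduling described above --
# function and parameter names are mine, not the actual player's.

def fetched_seconds(played_s: float, duration_s: float,
                    initial_s: float = 30.0, step_s: float = 15.0) -> float:
    """Seconds of video downloaded once the user has watched
    `played_s` seconds: a 30 s initial prefetch, then one more 15 s
    segment whenever the buffer ahead of the playhead drops below 15 s."""
    fetched = min(initial_s, duration_s)
    while fetched - played_s < step_s and fetched < duration_s:
        fetched = min(fetched + step_s, duration_s)
    return fetched

# The post's worked example: a 2-minute (120 s), 20 MB clip
# abandoned after 60 s of viewing.
mb_per_second = 20 / 120
downloaded_mb = fetched_seconds(60, 120) * mb_per_second
# fetched_seconds(60, 120) -> 75.0 s, i.e. 12.5 MB downloaded instead
# of 20 MB -- close to the "~1 min 15 s, ~12 MB" figure above.
```

Under these assumed parameters, a viewer who watches the whole clip still downloads all of it; the savings come entirely from abandoned playbacks, which matches the assumption discussed below.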


Assumptions – 

One of the key assumptions in the above approach is that a user may not watch the full length of the video. The saved bandwidth comes from the part of the video that the user abandons or doesn’t watch. As an experiment, I instrumented the video player for one of our properties to find out whether people watch VODs in their entirety or abandon them midway. On one of our high-traffic properties we found that almost a third of the users don’t end up watching the video till the very end.


Actual live results –

I have been lucky enough to try out this theory on heavy-traffic sites. The results have been fantastic. No user complaints about the video experience, and we’ve ended up saving over 30% in costs via reduced bandwidth bills. Want to see it live in action? Check it out here on Moneycontrol videos. You’ll notice the experience is pretty seamless.

This is purely a client side solution (and so easy to implement) without you having to change anything on the server-side or in the video itself. It’s been tested and works fine with both Akamai and Tata/Bitgravity CDNs. The approach works for both FLVs as well as (hinted) MP4 video files.

ps: Would you like to know how to modify or write a flash video player that downloads videos in segments or chunks instead of the entire thing at once? Let me know via the comments. If I get enough requests, I’ll write up a post explaining various approaches. Else, I’ll assume you’re a smart bunch that knows how to do it.

Latency numbers every programmer should know

L1 cache reference ......................... 0.5 ns
Branch mispredict ............................ 5 ns
L2 cache reference ........................... 7 ns
Mutex lock/unlock ........................... 25 ns
Main memory reference ...................... 100 ns             
Compress 1K bytes with Zippy ............. 3,000 ns  =   3 µs
Send 2K bytes over 1 Gbps network ....... 20,000 ns  =  20 µs
SSD random read ........................ 150,000 ns  = 150 µs
Read 1 MB sequentially from memory ..... 250,000 ns  = 250 µs
Round trip within same datacenter ...... 500,000 ns  = 0.5 ms
Read 1 MB sequentially from SSD* ..... 1,000,000 ns  =   1 ms
Disk seek ........................... 10,000,000 ns  =  10 ms
Read 1 MB sequentially from disk .... 20,000,000 ns  =  20 ms
Send packet CA->Netherlands->CA .... 150,000,000 ns  = 150 ms
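A few ratios computed from the table make the scale concrete:

```python
# Ratios computed directly from the latency table above (nanoseconds).
NS = {
    "l1_cache_ref": 0.5,
    "main_memory_ref": 100,
    "ssd_random_read": 150_000,
    "datacenter_round_trip": 500_000,
    "disk_seek": 10_000_000,
}

# Main memory is 200x slower than an L1 cache reference.
memory_vs_l1 = NS["main_memory_ref"] / NS["l1_cache_ref"]        # 200.0

# One disk seek costs as much as 100,000 main-memory references.
seek_vs_memory = NS["disk_seek"] / NS["main_memory_ref"]         # 100000.0

# You could make 20 round trips within a datacenter in one disk seek.
trips_per_seek = NS["disk_seek"] / NS["datacenter_round_trip"]   # 20.0
```

These ratios are why caching and avoiding disk seeks dominate so much of systems design.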


Online apps I cannot do without

I was reading someone’s blog post listing the online apps that the author couldn’t do without. As I read the list, I mentally started making up my own list. Well, here is mine (in no particular order)-

  • WordPress – Cool blogging platform. Lets you write a blog post without too many distractions. Usage – once a week.
  • Google Reader – My daily peek into 45 sources of information that I want to consume. I don’t read all my feeds, but a quick cursory glance is guaranteed.  Usage – daily.
  • Gmail – I started with a “free” Hotmail account waaay back in 1996. While I opened a Yahoo account too, it was never the one for me. I was a happy :-) Hotmail user until I set up my own mail server a few years ago. And then when Gmail launched, I switched over to it. Usage – keep it open whenever I am online. My Blackberry connects me to my email at other times.
  • Google – We all know why.  Usage – I use it so much, I guess they’ve put up a dedicated server just for me :-)
  • Timesofindia – I loathe the site, but twice a day I go there to do a quick 3-second scan and then close it. If any news is big enough, they’ll put it up in big font. So the 3-second scan does it for me. Usage – insignificant
  • burrp!TV – I don’t watch TV, but on days when I want to catch a movie on TV, I go to burrp!TV. Usage – twice a week
  • ICICI Direct – Competes with the TOI for the most kachra (garbage) site ever. Usage – once a fortnight, just to make sure that none of my shares did a Satyam on me.

That’s all. While I might visit Facebook, LinkedIn, Amazon etc. once in a while, I don’t use them on a daily or weekly basis. As you can see, there are NO Indian sites that compel once-a-day usage. Do you know of any?