TCP's slow-start and exponential-backoff algorithms are absolute murder when your pipe has a large bandwidth-delay product. There are a whole slew of UDP-based implementations (and a couple of TCP variants) targeted at large data transfers over such pipes.
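To see why, here's some back-of-the-envelope arithmetic (illustrative numbers, not from anyone's benchmark): on a fat long-haul pipe the bandwidth-delay product is huge, and slow-start only roughly doubles the congestion window per RTT, so it takes a long time just to fill the pipe.

```python
import math

# Illustrative link: 1 Gbit/s with 100 ms round-trip time.
bandwidth_bps = 1_000_000_000   # 1 Gbit/s
rtt_s = 0.100                   # 100 ms RTT

# Bandwidth-delay product: bytes that must be "in flight" to keep the pipe full.
bdp_bytes = bandwidth_bps / 8 * rtt_s
print(f"BDP: {bdp_bytes / 1e6:.1f} MB in flight")      # 12.5 MB

# Slow-start roughly doubles the window each RTT; assume an initial
# window of 10 segments of 1460 bytes (a common modern default).
init_window = 10 * 1460
rtts_to_fill = math.ceil(math.log2(bdp_bytes / init_window))
print(f"~{rtts_to_fill} RTTs (~{rtts_to_fill * rtt_s:.1f} s) just to open the window")
```

And that's the best case, before a single loss knocks the window back down.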
The underlying problem is trying to achieve fairness not only between instances of the new protocol itself (which this implementation and many others claim), but also fairness with instances of the legacy TCP protocol (this protocol's claim seems to be a little weaker here).
Yep. FastTCP is a pretty elegant algorithm that happens to be covered by all sorts of patents. There was a company behind it as well that seems to still be doing OK (FastSoft). Unfortunately it won't be turning up in Linux et al. due to those angles :-(.
There is also TCP Vegas, which is based on the same idea, is as good as FastTCP for all practical intents and purposes, ships with Linux 2.6, and is unencumbered by the patents.
I played with this and it's not faster than other TCP. In fact my implementation of NetBLT (http://www.faqs.org/rfcs/rfc998.html) blows this one out of the water.
An enhanced TCP that we use internally stacked up as follows relative to normal TCP and FastTCP (times are for retrieval of a 180KB file)
UDP is substantially faster than TCP because it doesn't provide any guarantees (e.g. reliable delivery or ordering); consequently it's more difficult to work with. This protocol builds on top of UDP to provide all of the nice guarantees that TCP does.
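The basic idea of re-adding guarantees on top of UDP can be sketched in a few lines. This is a toy stop-and-wait scheme (sequence numbers, ACKs, retransmit on timeout) over loopback, just to illustrate the pattern — real protocols in this space pipeline many packets in flight, which is the whole point on fat pipes.

```python
import socket
import struct
import threading

def reliable_send(sock, addr, payload: bytes, seq: int, timeout=0.2, retries=5):
    """Send one datagram tagged with a sequence number; retransmit until ACKed."""
    pkt = struct.pack("!I", seq) + payload
    sock.settimeout(timeout)
    for _ in range(retries):
        sock.sendto(pkt, addr)
        try:
            ack, _ = sock.recvfrom(4)
            if struct.unpack("!I", ack)[0] == seq:
                return True          # receiver confirmed delivery
        except socket.timeout:
            continue                 # datagram or ACK lost: retransmit
    return False

def receiver(sock, out):
    """Receive one datagram, record the payload, and ACK its sequence number."""
    data, peer = sock.recvfrom(65535)
    seq = struct.unpack("!I", data[:4])[0]
    out.append(data[4:])
    sock.sendto(struct.pack("!I", seq), peer)

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

got = []
t = threading.Thread(target=receiver, args=(rx, got))
t.start()
ok = reliable_send(tx, rx.getsockname(), b"hello over UDP", seq=1)
t.join()
tx.close()
rx.close()
```

Stop-and-wait gets you reliability but throughput of one packet per RTT, which is exactly the failure mode these protocols exist to avoid; they keep a window of unacknowledged packets outstanding instead.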
This is really cool, I had tried making something like this a few years ago when I was working in a lab. Mine worked, but I never really got the time to perfect it.