Where lagg can't help you
Feb 25, 2012 at 01:44 AM

The last real-world speed bump in Ethernet technology came when 1 GBit/s interfaces became standard in servers about 10 years ago. Not much has happened since then; in fact, I have only once worked with a system capable of 10 GBit/s on a single link. When I learned how much power a 10 GBit/s copper link draws compared to a common 1 GBit/s link, I didn't really want to dive too deep into that in my private testbed.

So when 1 GBit/s wasn't enough, I usually just scaled by increasing the number of interfaces and bonding them through the FreeBSD implementation of trunking, lagg, in combination with the lacp keyword. This works fine as long as the channel itself is cheap, that is, as long as a complete link (interface + cable + switch port + power consumption) is inexpensive, as it usually is when the servers are situated close to each other.
It's not what you want when a single channel becomes more expensive, most often because increased distance forces you onto optical fibre.
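For reference, the bonding setup described above is just a few ifconfig lines on FreeBSD. A minimal sketch, assuming two gigabit ports named em0 and em1 and an example address (your interface names and addressing will differ, and the switch ports must be configured for LACP as well):

```shell
# Create the aggregation device and attach both physical ports to it,
# using LACP (802.3ad) as the aggregation protocol.
ifconfig lagg0 create
ifconfig lagg0 laggproto lacp laggport em0 laggport em1

# Assign the address to the lagg interface, not to the member ports.
ifconfig lagg0 inet 192.0.2.10/24 up
```

Note that LACP balances flows, not packets: a single TCP connection is hashed onto one member port, so one stream never exceeds the speed of a single link. That is exactly why lagg can't help in the scenario below.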

Today even medium-performance hardware can push a significant amount of data through its network interfaces. As an example, I have a small backup server running rsyncd during an initial sync, mainly for a beefy storage box a few hundred meters away. The material's file size is centered around 20 MB, but with high variance. Despite this, the external disk subsystem should be able to sustain ≥ 160 MB/s with this kind of load; at least that's what it does via lo0. But even with smaller files, the only available link in this case (an optical LC interconnect between two switches) is saturated most of the time.
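The arithmetic behind that saturation is simple. A back-of-the-envelope sketch, assuming roughly 6% framing overhead (a hypothetical round figure for Ethernet/IP/TCP headers; the exact value depends on frame size):

```python
# Why a single gigabit link, not the disks, is the bottleneck here.
link_bits_per_s = 1_000_000_000           # 1 GBit/s line rate
overhead_factor = 0.94                    # assumed ~6% protocol overhead
payload_MB_per_s = link_bits_per_s * overhead_factor / 8 / 1e6
print(payload_MB_per_s)                   # → 117.5

disk_MB_per_s = 160                       # what the disk subsystem sustains via lo0
print(payload_MB_per_s < disk_MB_per_s)   # → True
```

So the wire caps the transfer at well under what the storage could deliver, and because LACP pins a single rsync stream to one member link, adding more bonded ports would not raise this number.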

The average sustained rate is pretty close to the peak traffic in this screenshot, and it has been like that for the last hour. Not a preferable situation from my point of view, since intranet bandwidth should never be maxed out, but the infrastructure is from 2006. Due to our customer's budget constraints we could only replace the server hardware as the old stuff was simply dying. There are still a few terabytes to go, but for now it's ok; fortunately the backup time window is huge. We'll see how this develops over the next six years 8)
