Wednesday, June 23, 2010

Network Speeds - GbE, Wireless N, and how they affect you.

A significant debate in my mind between different wireless (and wired) network technologies has been about effective speed.

What I mean by effective speed is two things: first, the speed you can actually get from the network (after overhead, crosstalk, and other factors); second, the speed that's useful to the end user.

Because of this, I end up in quite a conundrum... with server and back-end topologies and technologies, you generally know what kind of speed you'll need and what you can use. When connecting servers together, whether from scratch or to an existing network, you can work out whether you'll need GbE, 802.3ad-aggregated GbE, a multi-gigabit connection, or even just a 100Mbit connection, depending on the application. For example, a high-performance file server or database is a good candidate for one of the more expensive connections, especially if the system will be used concurrently by many users and the drive array can sustain multiple gigabits of simultaneous output to multiple destinations...
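Here's a rough back-of-envelope sketch of that sizing exercise. The user count, per-user rate, and headroom factor are made-up illustrative numbers, not measurements from any real deployment:

```
# Rough back-of-envelope sizing for a file server uplink.  The user count and
# per-user rate are made-up illustrative numbers, not measurements.

def required_uplink_mbit(concurrent_users, per_user_mbit, headroom=1.5):
    """Peak concurrent demand, padded for bursts and protocol overhead."""
    return concurrent_users * per_user_mbit * headroom

demand = required_uplink_mbit(concurrent_users=40, per_user_mbit=20)
print(f"Estimated peak demand: {demand:.0f} Mbit/s")

# Note: 802.3ad aggregation balances *flows* across links, so a single client
# stream still tops out at one link's speed; the aggregate only helps with
# many concurrent users, which is exactly the scenario above.
for name, capacity_mbit in [("100BaseTX", 100), ("GbE", 1000),
                            ("2x GbE (802.3ad)", 2000), ("10GbE", 10000)]:
    verdict = "enough" if capacity_mbit >= demand else "too small"
    print(f"{name:>17}: {capacity_mbit:>5} Mbit/s -> {verdict}")
```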

For servers, it's pretty easy to deduce what you need; the hard part is not only finding the hardware (since 90% of computer shops carry only consumer-oriented products), but getting management to sign off on the purchase...

For client access roles and access points, you really have to start debating: is one technology really better than another? Let's review.

Almost all network access by end users (or consumers) is internet-bound. Not many people live in a world where an intranet even exists, never mind one with servers set up on it or any "local" resources to access. With this in mind, I quickly come down to two considerations: how many people will be using the connection, and what is the WAN speed?

WAN speed: most consumer systems are on consumer internet lines, which are generally not terribly fast. In North America, most consumer broadband lines are between 3Mbit and 15Mbit. There are exceptions at both the very fast and very slow ends, but for the most part, they fit into this model. In these cases I have to question the value of buying the latest GbE router or switch, or the newest, fanciest dual-band Wireless N router or AP. Since 90% of traffic is going to be internet-bound, the fastest any one user's connection will go is 3-15Mbit. The current standard for wired networking is 100Mbit full duplex (100BaseTX), and the current standard for wireless is 802.11g (Wireless G), which runs at 54Mbit. Both of these offer connection speeds anywhere from roughly 3.5 times to more than 30 times FASTER than current internet speeds.
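To put numbers on that, here's a quick sketch of the ratios, using the same figures as above:

```
# Quick ratios of local link speed to typical 2010-era consumer WAN speeds.
wan_speeds_mbit = [3, 15]          # typical North American consumer broadband
links_mbit = {"802.11g": 54, "100BaseTX": 100}

for link, rate in links_mbit.items():
    for wan in wan_speeds_mbit:
        print(f"{link} ({rate} Mbit) vs {wan} Mbit WAN: {rate / wan:.1f}x faster")
```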

Factor in the fact that consumer internet lines don't seem to be getting any significant bump in speed, now or in the near future, and you've found yourself in my debate.

If you're not using any resources on your local network, why do you need anything more than a 100BaseTX or 802.11g network? ... To be fair, wireless technologies will never run as fast as advertised, because sending and receiving happen on the same frequency, making the system half-duplex by nature (meaning you can only send OR receive, not both). Still, even in high-traffic situations, a half-duplex connection can sustain something near 30-40% of its maximum bandwidth (except in extreme scenarios).
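As a quick sanity check, applying that 30-40% figure to 802.11g still leaves you well ahead of a typical consumer WAN line:

```
# Apply the rough "30-40% of nominal rate under load" figure from above to
# see whether a half-duplex 802.11g link still out-runs a consumer WAN line.
nominal_g_mbit = 54
for efficiency in (0.30, 0.40):
    effective = nominal_g_mbit * efficiency
    print(f"802.11g at {efficiency:.0%} efficiency: ~{effective:.0f} Mbit/s "
          f"(vs a 3-15 Mbit WAN link)")
```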

Additionally, a lot of the technology that is touted as "Wireless N" is really just a beefed-up Wireless G that's been given encoding similar to Wireless N's (making it possible to encode more data per transmitted symbol, and therefore increasing throughput)... What I mean is that 802.11n was originally designed to run on higher frequencies, with shorter wavelengths (eventually they settled on the 5GHz band). With wider channels and better encoding, it became possible to pack significantly more data into the stream than was previously possible.
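If you're curious where the headline rates come from, here's a simplified sketch using the standard OFDM parameters for a single stream; it deliberately ignores MIMO and MAC overhead, so treat it as an illustration rather than a complete model:

```
# Where the headline per-stream rates come from (simplified OFDM arithmetic;
# ignores MIMO spatial streams and MAC-level overhead).
def phy_rate_mbit(data_subcarriers, bits_per_symbol, coding_rate, symbol_us):
    return data_subcarriers * bits_per_symbol * coding_rate / symbol_us

# 802.11g, 20 MHz: 48 data subcarriers, 64-QAM (6 bits), rate 3/4, 4 us symbol
g_rate = phy_rate_mbit(48, 6, 3/4, 4.0)      # -> 54 Mbit/s

# 802.11n, one stream, 40 MHz "fat" channel: 108 data subcarriers, 64-QAM,
# rate 5/6, 3.6 us symbol (short guard interval)
n_rate = phy_rate_mbit(108, 6, 5/6, 3.6)     # -> 150 Mbit/s

print(f"802.11g: {g_rate:.0f} Mbit/s, 802.11n (1 stream, 40 MHz): {n_rate:.0f} Mbit/s")
```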

Allowing Wireless N on the same frequency band as Wireless G causes additional interference: Wireless G takes more time to transmit, creating more noise on the channels that Wireless N is trying to use, while at the same time Wireless N is unintelligible noise to any Wireless G implementation nearby. The real conundrum is that to use Wireless N at 2.4GHz effectively, you have to bump the channel width from 20MHz to 40MHz. With a "fat channel" (40MHz) on channel 6 (the midpoint for wireless in North America), the radio spills over into almost every other wireless channel, causing interference across the whole band.
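A quick sketch of that overlap, using the standard 2.4GHz channel centers (channel n sits at 2407 + 5n MHz):

```
# Sketch: which 20 MHz channels does a 40 MHz transmission centered on
# channel 6 overlap?  Channel centers per 802.11: channel n -> 2407 + 5*n MHz.
centers = {ch: 2407 + 5 * ch for ch in range(1, 12)}   # N. American channels 1-11
fat_center, fat_width, narrow_width = centers[6], 40, 20

overlapped = [ch for ch, c in centers.items()
              if abs(c - fat_center) < (fat_width + narrow_width) / 2]
print("40 MHz channel on ch 6 overlaps channels:", overlapped)
# -> every channel from 1 to 11, including all three "non-overlapping"
#    choices (1, 6, 11) used in North America.
```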

The bottom line with 2.4GHz Wireless N is that it only really works in a controlled environment, where there are nearly no other 2.4GHz networks or devices (and that includes cordless phones).

Add to that the fact that the extra speed isn't making anything go faster, because you're using the 150-300Mbit 2.4GHz Wireless N to access the internet, and you end up with this mishmash of different, competing technologies that completely ruins the experience for everyone (since they cause so much interference).

The only true benefit you could ever obtain from Wireless N is in its intended implementation at 5GHz (where there's currently very little demand, a.k.a. interference), using dedicated 5GHz-ONLY devices and nodes. Additionally, you would have to be using that wireless to access local resources; not just that, but you would have to make sure that your AP, and every link between you and the system you're talking to, is GbE, since Wireless N can fully saturate 100Mbit Ethernet... Then, on top of that, you almost have to be accessing an array of drives to really take full advantage of the throughput, since even good conventional drives max out around 400 Mbit or so of sustained throughput... and that's not even touching how useless GbE would be to most users...
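Put another way, the end-to-end rate is whatever the slowest hop allows. Here's a sketch using the rough figures from this post; a real setup will vary:

```
# The end-to-end rate is set by the slowest hop.  Numbers below are the rough
# figures from this post (your hardware will vary).
def end_to_end_mbit(*hops):
    return min(hops)

wireless_n_effective = 300 * 0.5   # ~50% of the 300 Mbit headline rate
fast_ethernet_uplink = 100         # any hop still on 100BaseTX
single_drive = 400                 # sustained throughput of one conventional drive

print("With a 100 Mbit hop:", end_to_end_mbit(wireless_n_effective,
                                              fast_ethernet_uplink,
                                              single_drive), "Mbit/s")
print("All-GbE wiring:     ", end_to_end_mbit(wireless_n_effective,
                                              1000,
                                              single_drive), "Mbit/s")
```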

Yeah, I understand that, even though the bandwidth isn't really necessary, GbE can reduce ping times because the time to serialize each packet onto the wire is so short; however, the difference in real-world scenarios is negligible at best.
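For the curious, the per-packet serialization time works out like this:

```
# Serialization delay for a full-size Ethernet frame: the per-packet time
# saved by GbE is on the order of a tenth of a millisecond.
frame_bits = 1500 * 8
for name, rate_bps in [("100BaseTX", 100e6), ("GbE", 1e9)]:
    print(f"{name}: {frame_bits / rate_bps * 1000:.3f} ms per 1500-byte frame")
# Difference is ~0.1 ms -- lost in the noise next to typical LAN/WAN latencies.
```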

The really baffling thing, for me, is when there are respectable companies that actually have intranets, with dozens of client systems, roaming profiles, network shares, VoIP, internet, etc., all riding on the same network fabric, and they're still running on 100Mbit Ethernet. That's. Just. Amazing. Upgrading to GbE in those scenarios would have a massive impact, and the upgrade costs would be minimal, since a lot of unmanaged switches are rather cheap, even with large port counts... Managed switches aren't too far behind in cost.

And really, in those scenarios, isn't the cost of the switch far outweighed by the increase in worker productivity, since now they don't have to wait forever for a roaming profile to load before they can actually do some work?
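As a purely hypothetical example (the profile size and link efficiency here are assumptions for illustration, not measurements from any real site):

```
# Hypothetical example: time to pull a roaming profile at login.
# The 500 MB profile size and ~60% link efficiency are assumptions for
# illustration, not figures from any particular site.
profile_mbytes = 500
efficiency = 0.6   # rough allowance for protocol overhead and contention

for name, link_mbit in [("100BaseTX", 100), ("GbE", 1000)]:
    seconds = profile_mbytes * 8 / (link_mbit * efficiency)
    print(f"{name}: ~{seconds:.0f} s to load a {profile_mbytes} MB profile")
```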

Food for thought.
