Quote:
Originally Posted by keithwalton
Gigabit was never intended for raw point-to-point speed, but more for handling multiple simultaneous connections over the same lines, and for programs running in memory linked to the memory on other machines.
There are multiple bottlenecks in your config.
Hard drives will be the main one: in single-drive mode, brand-new large IDE/SATA drives peak at about 70 MB/s; RAID 0 them and that figure nearly doubles.
15k RPM SCSI drives do around 100 MB/s.
The way your gigabit controller is connected is also often a bottleneck.
If you're using a PCI card, it has a max bandwidth of 133 MB/s (just over 1 Gbit/s), and that is shared with every other device on the PCI bus (in older machines the hard drive controllers usually sit on it as well).
Some motherboards connect GbE via the PCI or PCI-e bus to the southbridge, and then the northbridge-to-southbridge link can be a bottleneck.
The best implementation so far is on Intel server (and some desktop) boards, called CSA, which connects the GbE directly to the northbridge.
Fair enough, and I understand that. I only really put it in because I needed to replace my 10/100 switch and it wasn't much more expensive. I just wanted to know if the speeds I was pushing through it were reasonable for off-the-shelf consumer kit, and about on par with what anyone else was getting.
It's quite nice to be able to write to a drive on the storage machine almost as quickly, if not as quickly, as if it were a local drive.
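For anyone sanity-checking their own numbers, the bottleneck arithmetic from the quote can be sketched like this. The figures are the illustrative ones from the post (not measurements), and the whole-chain-limited-by-slowest-stage model is a simplification that ignores protocol and filesystem overhead:

```python
# Rough bottleneck estimate for a GbE file transfer.
# All figures are illustrative, taken from the discussion above.

def effective_throughput_mb_s(link_mb_s, bus_mb_s, disk_mb_s):
    """The slowest stage in the chain limits the whole transfer."""
    return min(link_mb_s, bus_mb_s, disk_mb_s)

GBE_LINK = 1000 / 8   # 1 Gbit/s is 125 MB/s before protocol overhead
PCI_BUS = 133         # classic 32-bit/33 MHz PCI, shared by all devices
SINGLE_IDE = 70       # new large IDE/SATA drive, single-drive mode
SCSI_15K = 100        # 15k RPM SCSI drive

print(effective_throughput_mb_s(GBE_LINK, PCI_BUS, SINGLE_IDE))  # 70: disk-bound
print(effective_throughput_mb_s(GBE_LINK, PCI_BUS, SCSI_15K))    # 100: still disk-bound
```

So with a single consumer IDE/SATA drive on each end, ~70 MB/s is about the most you could hope for over gigabit, which matches the "almost as fast as a local drive" experience.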