My office has all my networked devices (way too many!) arranged above and below a series of benches. This puts the drops tediously close together (24 in the ~100 sq ft of floor space occupied by the workbenches).
To make matters worse, I've secured the cables to the underside of the benches (floor is covered with equipment and power cords) which requires a contortionist to access them (e.g., to add/remove/replace).
Because it's all star topology (GbE), every drop is essentially "trimmed-to-length" to reflect the actual distance from that device to the switch (big service loops would just increase the clutter under the benches).
So, when I opt to move a piece of kit more than a foot or so, I have to remove that cable and replace it with one that's a foot or so longer/shorter! It takes the better part of a day to make these sorts of adjustments (because everything under the benches has to be moved out of the way so I can crawl under and fish the new cable through!)
[I had to add two such cables yesterday to accommodate two more bits of kit -- a day "wasted"! And, I've now run out of ports on the 24p switch!]
Rather than replace the 24p switch with a 48p unit, I think a smarter move may be to replace it with a 16 and two 8's (for 32p). And, then I can spread them out a bit so I don't have a bundle of 24 (or 32) cables coming to ONE location!
The only real downside I can see (other than two more power cords that have to find their way back to the UPS that powers the existing switch; the need for two more "mounting places" for switches; and two more IP addresses) is that I'd potentially be limiting the aggregate network bandwidth (for theoretical full mesh connectivity). I.e., all the devices on ONE switch would have to share a link to the group of devices serviced by the "next" switch, etc.
But, as it's typically just me consuming bandwidth, there, I can probably arrange my needs so that I'm only really hammering away between two nodes at a time (e.g., file/volume transfers).
In that case, if both devices are on the same switch, there's no difference from my current setup. And, if they are on different switches, there will be a slightly longer transit delay (to cross to the "other" switch) but bandwidth should be the same as if on the same switch (because no other traffic would be sharing that inter-switch link).
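The uplink reasoning above can be sketched as back-of-the-envelope arithmetic. This is just an illustration under the stated assumptions (1 Gb/s links everywhere, fair sharing of the single inter-switch link, no other overhead); the function name is mine, not anything from a real tool:

```python
# Back-of-the-envelope: per-flow throughput when N concurrent
# transfers all cross a single shared 1 Gb/s inter-switch uplink.

UPLINK_GBPS = 1.0  # GbE link between two switches

def per_flow_gbps(n_cross_switch_flows: int) -> float:
    """Idealized fair-share throughput per flow on the shared uplink."""
    if n_cross_switch_flows <= 0:
        return 0.0
    return UPLINK_GBPS / n_cross_switch_flows

# One transfer at a time: full wire speed, same as a single big switch.
print(per_flow_gbps(1))  # 1.0
# Two simultaneous cross-switch transfers must split the uplink.
print(per_flow_gbps(2))  # 0.5
```

So as long as only one cross-switch transfer runs at a time, the split-switch layout costs nothing in bandwidth; it only starts to hurt if multiple transfers cross the same uplink simultaneously.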
But I'm wondering if there are other subtle differences that will piss me off when I uncover them (AFTER I've spent TWO OR THREE days rewiring everything!). The first thing that comes to mind is that I'll probably end up with consumer-grade switches (do they make any enterprise kit that small -- 8/16p?) and they may be more temperamental (no fans, wall-wart power supplies, etc.)
[(sigh) Oh for the days of 10Base2!]
Anything else?