I advocate an official MICE position that switch fabric hardware be dedicated to MICE, but with the board able to make exceptions on a case-by-case basis. The requesting entity could plead its case and explain why it feels it necessary to provide remote access via a shared switch fabric. In this case, for example, an exception would allow WiscNet to connect additional networks to MICE that otherwise would not connect; WiscNet has a clean reputation within MICE, nobody has any reason to suspect they are up to anything shady, and they intend to use modern hardware, not some ancient 2000s-era hunk of junk. The board can then weigh the request and approve or reject it as they deem best for the exchange, at their own discretion.


Requests for remote switches are fairly low volume in the first place, and requests for remote switches that would be deployed as mixed-use switches are even rarer, so I suspect handling requests of this nature would not be an undue burden on the board. The board can easily dismiss frivolous requests, citing the relevant MICE policy.


I feel this would preserve good network stability, as I agree that dedicated hardware is less susceptible to problems stemming from mixed-use conditions, while still giving the exchange as a whole the most benefit by allowing shared use, under control, when it makes sense. The requesting entity would still need to go through the regular process of discussion on the list and so on; shared-use fabric switches would just require the extra step of being pre-screened by the board.






From: MICE Discuss <[log in to unmask]> on behalf of Gary Glissendorf <[log in to unmask]>
Sent: Monday, December 2, 2019 4:27 PM
To: [log in to unmask]
Subject: Re: [MICE-DISCUSS] WiscNet Remote Switch / Dedicated Remote Switches
 

SDN would be in support of a one-MAC-per-external-interface requirement for a participant connected at the core or connected to a remote switch.  A standard practice is to isolate any chance of broadcasts leaving a provider's network by placing a point-to-point subnet or point-to-point VLAN on the interfaces directly leaving the network.  But when connecting to an exchange, where connectivity into an L2 topology must be supported, the opportunity for broadcast loops/storms does increase.  SDN was an early participant in the exchange, and I recall outages after which we started minimizing the traffic that was traversing the exchange.  In recent years the exchange has become an important source for content, and keeping the network stable and secure is a benefit to all of the participants.
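
Purely as an illustration of that point-to-point practice, here is a minimal IOS-style sketch of a routed handoff leaving a provider's network. The interface name and the 192.0.2.0/31 documentation-range addressing are invented for the example, not anything SDN or MICE actually uses:

    ! Hypothetical routed point-to-point handoff leaving the provider network
    interface TenGigabitEthernet0/0/0
     description Point-to-point handoff toward external peer (example only)
     no switchport
     ip address 192.0.2.0 255.255.255.254
     no ip proxy-arp

Keeping the handoff routed (or at least confined to its own point-to-point VLAN) keeps internal broadcast domains from extending past the network edge.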

 

Gary Glissendorf | Network Architect

[log in to unmask]" style="color: blue; text-decoration: underline;">[log in to unmask]
2900 W. 10th St.
| Sioux Falls, SD 57104
(w) 605.978.3558   | (c) 605.359-3737 | (tf) 800.247.1442
SDN NOC 877.287.8023
NOC Support email: [log in to unmask]" style="color: blue; text-decoration: underline;"> [log in to unmask]


“Ausgezeichnet Zueinander Sein” (“Be excellent to one another”)



 

 

From: MICE Discuss <[log in to unmask]> On Behalf Of Jeremy Lumby
Sent: Monday, December 2, 2019 4:13 PM
To: [log in to unmask]
Subject: Re: [MICE-DISCUSS] WiscNet Remote Switch / Dedicated Remote Switches

 


 

One reason I am very much in favor of dedicated hardware is isolation of the unknown.  As any seasoned network professional knows, things rarely fail in the way you expect.  The more complicated things are, the more likely unintended things are to happen, especially in the case of a failure, and the more functions a piece of equipment performs, the more widespread that failure becomes.

 

I think those of us that have been on MICE for some time can even remember some examples.  The one that comes to mind is when the MICE core was 100% Juniper.  We would have IX-wide outages caused by the Juniper switches flooding everyone’s ports to capacity with other members’ unicast traffic.  This was obviously not a feature, user error, or a setting in the switch.  On at least one occasion a software update was applied that specifically stated it fixed the bug, and unfortunately it did not.  Some of these Juniper switches are still in operation in the MICE core for the 1G ports.  Since the switches were connected with proprietary Juniper stacking cables, they acted as one, amplifying the problem.  The switches still have the unicast flooding problem 1-2 times per year.  The difference is that it is now isolated to those 1G members, so it is not as noticeable and no longer impacts the majority of members on the IX.  We see that the larger networks/CDNs that connect to MICE always request as much diversity as possible, whether on separate linecards or separate switches.

 

From: MICE Discuss [mailto:[log in to unmask]] On Behalf Of Ben Wiechman
Sent: Monday, December 02, 2019 2:18 PM
To: [log in to unmask]
Subject: Re: [MICE-DISCUSS] WiscNet Remote Switch / Dedicated Remote Switches

 

My concern would be protecting the integrity of the exchange. Just because there have been no issues doesn't mean there will be no issues at some point. Protecting that integrity doesn't necessarily require dedicated hardware, nor does dedicated hardware necessarily guarantee it. Asking those who currently maintain shared extension switches to disclose the practices they have in place today would be a good starting point toward better understanding the potential risks, and would help identify best practices already in use, in addition to the suggestions already offered by Steve.

 

Ben Wiechman

Director of IP Strategy and Engineering

320.247.3224 | [log in to unmask]

Arvig | 224 East Main Street | Melrose, MN 56352 | arvig.com

 

 

 

On Mon, Dec 2, 2019 at 12:09 PM Jeremy Lumby <[log in to unmask]> wrote:

In my opinion, a single MAC address per participant facing MICE is the most important requirement for the protection of MICE.  I feel it is so important that I contacted the board and received approval to restrict my extension switch to one MAC address per participant regardless of whether MICE requires it or not.  I think short-term exceptions for maintenance/upgrades are fine.  I feel that trying to implement the one-MAC-address rule on a shared extension switch would be very problematic (mostly for the other traffic that could be going across the same switch).

 

I do not feel that the added cost/complexity of requiring a dedicated extension switch would deter any network from connecting.  As an extension switch operator offering multiple different services, I offer MICE completely free to anyone, with no requirement that they connect to anything other than MICE.  That said, they are responsible for getting one connection to me; once I receive that connection, all services are broken out onto separate dedicated hardware free of charge, in a manner that would satisfy any IX that allows connections to anything other than its own core switches.

 

From: MICE Discuss [mailto:[log in to unmask]] On Behalf Of Steve Howard
Sent: Monday, December 02, 2019 10:31 AM
To: [log in to unmask]
Subject: Re: [MICE-DISCUSS] WiscNet Remote Switch / Dedicated Remote Switches

 

I am not in favor of requiring MICE remote switches to be dedicated solely to MICE.

I think the proposed policy adds unnecessary complexity and costs (one-time hardware, monthly cross-connects, space at 511, and environmental) for remote switch operators.  It also discourages additional membership (WiscNet) and, in my opinion, results in very  little real value to MICE.  Why institute a policy with real costs to MICE members and not much real value?

Sometimes the MICE community seems to worship larger exchanges, most notably SIX and AMS-IX.  MICE should make decisions based upon what is best for MICE.  Just because the “big” exchanges do something doesn’t necessarily mean it is best for MICE.  We are not SIX.  We are not AMS-IX.  Sometimes it is better to be a leader and not a follower!


I’d rather not have unnecessary rules and burdens.  I’m not exactly sure what real problem this proposed rule is trying to solve.  But, if there truly is a technical problem here that needs to be solved, I’d like to propose a few suggestions for discussion that could help protect the exchange:

1)  If supported by the remote switch, enforce a specific MAC address requirement on the MICE VLAN for remote switches.  Typically this would be one MAC address on the MICE VLAN, but there are a few cases where an additional MAC would be required.  This could create some challenges when changing routers, but in many cases these can be resolved with some prior planning.  For example, Paul Bunyan uses a bridged virtual interface (a Cisco BVI) for its connection to SIX.  This gives us the flexibility to move that interface to any device without having to contact people at SIX, yet it still gives SIX the protection of a single MAC address per port.  MICE could also have a policy to briefly allow an extra MAC address when there is planned maintenance.  (A rough configuration sketch follows this list.)

2)  MICE could institute a “probationary” status for remote switches that have a history of problems at MICE.  MICE has had non-dedicated remote switches from its beginning, and to the best of my knowledge this has never caused a problem.  However, we’ve all seen the recent history of a single MICE remote switch operator that has had issues with packet loss, apparently slow responses to outages, etc.  Fortunately, those problems have only affected their own customers.  Remote switch operators that have a history of poor performance could have additional requirements placed upon them until they’ve operated well for a period of time.  Additionally, if a remote switch operator’s non-dedicated use causes problems at MICE, the operator could lose the privilege of having a non-dedicated switch.

3)  MICE could determine an appropriate level of broadcast traffic and institute broadcast storm control based upon a multiplier of the typical levels.  (A second sketch illustrating this follows the list.)
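
As a rough illustration of suggestion 1, a remote switch operator could enforce the single-MAC limit with port security on the participant-facing port. This is only a hypothetical IOS-style sketch; the interface name and VLAN number are invented, the exact commands vary by platform, and nothing here is an official MICE configuration:

    ! Hypothetical participant-facing port on a remote switch (example only)
    interface TenGigabitEthernet1/0/1
     description MICE participant handoff
     switchport mode access
     switchport access vlan 100
     switchport port-security
     switchport port-security maximum 1
     switchport port-security violation restrict

For a planned router swap, the maximum could briefly be raised to 2 and then lowered again once the old router’s MAC address ages out, which matches the “temporary extra MAC during maintenance” idea above.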
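
For suggestion 3, a similarly hedged sketch of broadcast storm control on the same hypothetical port. The 1,000 packets-per-second threshold is a placeholder; the real value would come from measuring typical broadcast levels on the exchange and applying whatever multiplier MICE agrees on:

    ! Hypothetical broadcast storm control (threshold is a placeholder)
    interface TenGigabitEthernet1/0/1
     storm-control broadcast level pps 1k
     storm-control action trap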

Disclaimers:  I work for Paul Bunyan Communications which connects to MICE via the CNS remote switch; Paul Bunyan is a founder and part-owner of CNS; I help manage the CNS remote switch; I am one of the co-founders of MICE.  I try to be loyal to Paul Bunyan,  CNS, and MICE.  It gets complicated sometimes!

 

