> Some time after 3:15 this afternoon [10/5/2015] we began
> experiencing what appears to be a BGP routing issue within our core
> network. We are in the midst of tracking down the root cause, and
> will send updates as we have them.

We have identified the root cause and eliminated it so that it cannot recur.

When we upgraded our routing core to 10Gbps gear during a maintenance window several years ago, on 5/5/2013, one of our access routers was moved from copper to fiber connectivity into the core (connecting to a 10Gbps fiber switch stack). Unfortunately, the old copper feeds from that access router were left up and still talking to the wrong part of the network. Something flapped an interface on that access router, which activated bad routing into a bad area, a problem that had lain dormant for 2.5 years. Those old copper links have been removed and stricken from the access router so that they cannot cause any problems in the future.

Once we isolated that access router, which mainly feeds VMForge Virtual Machines and "Cage #14" colocation machines, the rest of the network stabilized. Once the copper links were removed from that access router, VMForge and "Cage #14" colocation came fully back online as well. This was completed by 16:30 CDT.

If you have any further problems or questions, please let us know at [log in to unmask], or call us at 612-337-6340.

Thank you.

-- 
Doug McIntyre <[log in to unmask]>
ipHouse/Goldengate/Bitstream/ProNS
Network Engineer/Provisioning/Jack of all Trades