Services affected (warnings or errors):

Three unrelated events created a perfect storm of critical pages.

5:18am - first issue hits (load balancer); first pages start streaming in soon after.

- Websites, ipMom, and webmail behind our load balancer on our cluster were offline (single issue: the load balancers).
- Email flow via systems behind our load balancer (double issue: the load balancers plus the MySQL Cluster service interruption).
- Email flow via systems not load balanced (MySQL Cluster interruption).
- A web server had kernel panicked (single issue, related to itself only; it would not have affected web services).
- MySQL Cluster shutdown affecting most ipHouse internal operations dealing with email, ipMom, etc. (single issue involving an NFS fileserver; cause unknown).

6:25am - all issues remediated (load balancer, kernel-panicked web server, MySQL Cluster) and all services are back online.

Reason for service degradation:

This is the difficult area to describe because multiple things happened concurrently in the timeframe above. At first I really thought our monitoring systems (both of them) had started flaking out, as they weren't alerting about the same kinds of things. Usually when a failure hits and the pages start coming in, there is a pattern, and it doesn't take a leap to see where the potential issue may lie.

I started receiving pages, got up to look into things, and found that the active load balancer wasn't able to reach the nodes behind it, so the virtual servers (in load balancer speak) were considered failed and taken offline. I initiated a manual failover from active to standby, cutting over to the other load balancer, and recovery occurred (the command itself is sketched near the end of this note). This is really getting on my nerves, and my ticket from a month ago is still open with F5. This morning's problem does *not* match the previous issue(s), though; these are all new symptoms and, I think, a different problem. The 'then active' load balancer (prior to cutover) could not reach the systems on the internal network. When I did the manual cutover making the other load balancer active, everything recovered and it could reach the internal network again.

One web server had kernel panicked and was crashed. Huh? This is a completely random occurrence and doesn't seem to be a product of any other issue this morning, except to create confusion and delay. Very weird, considering I haven't seen many UNIX kernel panics on a virtualized platform. Cosmic rays?

A single (NFS) filesystem burp (a drive mapping out a bad block; 3 notifications in a single second) on our internal virtualization cluster caused the MySQL Cluster to shut down (cleanly). This is the weirdest of the issues, as there are 6 nodes in 2 groups stored across 3 different NFS servers, specifically separated so that a single event should not cause a problem. Either that design failed or MySQL Cluster isn't nearly as resilient as the documentation (or my months of testing) suggests. Thankfully the cluster operated as it was supposed to in terms of a clean shutdown, but it should not have shut down at all. I have lots of logs to look at to see if I can find a root cause for the initiated shutdown (the usual log locations are sketched below).

The MySQL Cluster has been configured to survive multiple data node failures without interrupting operations. As I mentioned, there are 6 nodes split into 2 3-node groups. Each 'pair' of nodes (one from each group) has its virtual disks stored on a different NFS server. This spreads the I/O load as well as creating redundancy over and above the cluster configuration. There should not be an interruption as long as one such pair of nodes is online.
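For the curious, that layout maps onto a pretty small NDB configuration. Below is a minimal sketch, not our actual config: hostnames and paths are made up, and the key bit is that NoOfReplicas=3 with six data nodes yields two node groups of three (data nodes are assigned to groups in the order they are listed).

    # config.ini (sketch) - 6 data nodes, NoOfReplicas=3 -> 2 node groups
    [ndbd default]
    NoOfReplicas=3                 # three copies of every table fragment
    DataDir=/var/lib/mysql-cluster # hypothetical path

    [ndb_mgmd]
    HostName=mgm1.example.com      # management node

    # Node group 0 is the first three [ndbd] sections ...
    [ndbd]
    HostName=ndb1.example.com      # VM disks on NFS server A
    [ndbd]
    HostName=ndb2.example.com      # NFS server B
    [ndbd]
    HostName=ndb3.example.com      # NFS server C

    # ... and node group 1 is the next three.
    [ndbd]
    HostName=ndb4.example.com      # NFS server A (pairs with ndb1)
    [ndbd]
    HostName=ndb5.example.com      # NFS server B (pairs with ndb2)
    [ndbd]
    HostName=ndb6.example.com      # NFS server C (pairs with ndb3)

    [mysqld]                       # SQL node(s)

In that arrangement, losing an entire NFS server takes out one node from each group, which the cluster is designed to survive; a full shutdown should require every node in one group going down. That is why this morning's behavior surprises me.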
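On the root-cause hunt: NDB is fairly chatty about why it shut down. Here's a sketch of where I'll be digging, with node IDs and paths as assumptions (the files live in each node's DataDir):

    # Management node: the cluster log records node failures and arbitration events
    grep -iE 'shutdown|arbitrat|disconnect' /var/lib/mysql-cluster/ndb_1_cluster.log

    # Each data node: stdout log, plus error/trace files written on failure
    less /var/lib/mysql-cluster/ndb_3_out.log
    less /var/lib/mysql-cluster/ndb_3_error.log   # references ndb_3_trace.log.* if present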
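And on the load balancer side, the manual failover mentioned earlier amounts to one command on the active unit. A sketch, assuming current-ish BIG-IP software (the bigpipe form is for older versions):

    # On the active BIG-IP: force it to standby so the peer takes over
    tmsh run /sys failover standby

    # Older TMOS, same effect via bigpipe
    b failover standby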
I'll need to do a blog post today describing the network layout, with a graphic to put a picture to these words. You can read it by pointing your browser to http://www.iphouse.com/blog/mike/ and I welcome your comments.

Support can be reached Monday thru Friday from 8:00am until 6:00pm via phone at 612-337-6340, or via email at [log in to unmask]

--
Mike Horwath, ipHouse - Welcome home!  [log in to unmask]
The universe is an island, surrounded by whatever it is that surrounds universes.  - Berkeley Fortune