Fatal0E
pwn3d
Joined: 16 Dec 2004
Posts: 143
Location: OKC
In our case we have six identical servers. The virtual machines are monitored by vCenter, which watches the load on the hosts and automatically moves VMs around to keep it spread evenly. It also monitors for potential problems, and under certain conditions will move all the VMs off a host if it predicts a failure. If a host fails unexpectedly then, depending on what happened, the VMs may be brought up automatically on the other hosts, or it may require manual intervention. All of our storage is centralized and nothing is on the host itself, so a host can melt to the ground and all should be well.
You can also make rules to help ensure systems stay online. We have rules to ensure the SQL cluster servers, AD controllers, etc., are not allowed on the same host.
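For anyone who'd rather script that kind of rule than click through the vCenter UI, here's a rough sketch using pyVmomi (VMware's official Python SDK). This isn't lifted from our environment; the vCenter address, credentials, cluster name, and VM names are all placeholders.
Code:
# Sketch: create a DRS anti-affinity rule ("keep these VMs on different hosts").
# All names/credentials below are made up for illustration.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab shortcut; verify certs in production
si = SmartConnect(host='vcenter.example.local', user='administrator@vsphere.local',
                  pwd='password', sslContext=ctx)
content = si.RetrieveContent()

def find_obj(vimtype, name):
    # Walk the inventory for the first managed object of this type with this name.
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.DestroyView()

cluster = find_obj(vim.ClusterComputeResource, 'ProdCluster')
vms = [find_obj(vim.VirtualMachine, n) for n in ('SQL-A', 'SQL-B')]

# AntiAffinityRuleSpec is the "never on the same host" rule type that DRS enforces.
rule = vim.cluster.AntiAffinityRuleSpec(name='separate-sql-nodes', enabled=True, vm=vms)
spec = vim.cluster.ConfigSpecEx(rulesSpec=[vim.cluster.RuleSpec(operation='add', info=rule)])
cluster.ReconfigureComputeResource_Task(spec, modify=True)
Disconnect(si)

DRS then refuses to place (or load-balance) those VMs onto the same host, so losing one host can't take out both SQL nodes at once.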
All of our data is also replicated to another site, with another cluster. Soon we will be able to bring everything online from another site if the primary site is erased by a tornado.
_________________
Core i7 4790K
16GB GSkill 1866
GeForce GTX 970
Thu Jan 07, 2016 10:01 am
Shinare
SEXNOCULAR
Joined: 17 Mar 2004
Posts: 13332
Location: Up your butt with a coconut!!
Extreme HA is not really needed in my situation, but of course I want as close to 100% uptime as is reasonably possible. I find it interesting, and excellent, that you can have two completely different servers and move a VM between the two.
For redundancy I think I could probably get away with a very cheap server with cut-down specs. For instance, just 4x 4TB SAS mechanical drives in a box with a processor and some memory. I think we could limp along on that if the super massive server cratered and needed to be repaired.
I know I'm a few years late to the server virtualization party, but in my job change happens on a geologic time scale. However, I'm really starting to see how extremely cool it all is. heh
So could the secondary server act as a hot spare/backup of the VMs on the primary? Meaning, if the big guy failed, could I just walk into the server room and "flip over" to the crutch? Like real-time synchronization? Or would I need some kind of shared storage solution for that, like a SAN?
_________________
For with what measure you measure it will be measured to you.
Thu Jan 07, 2016 4:25 pm
Sevnn
Candy Cane King
Joined: 22 Mar 2003
Posts: 7711
Location: Kyrat
We have a few host machines: one badass one, one medium-powered one that is coming up for HA, one older one we put less critical VMs on, and a medium-powered offsite box that gets copies of the data. We are running local storage, so we can't vMotion the machines (move them while running) between hosts, but it doesn't take long to shut one down and copy the images over. We have a SAN in the works, and once that is done we'll be able to vMotion between the hosts while they are still running and responding to requests. Our offsite gets a copy of the machines on a regular basis and can be brought up with a small amount of network and VM reconfig.
Shinare, the biggest issue you will have with your plan is that the virtual disk images (the files that represent the hard drives of the guest OS) will have to be copied over if you need to move to the second machine. As long as the bad boy is able to hold out long enough for that to happen, you'll be OK with minimal downtime. If you need the ability to move over to the second machine while hot, you'll need a SAN. There are reasonably priced SAN solutions out there, and iSCSI might be an option as well. If I were building something like you are talking about, I would buy two medium-powered machines with a cheapish SAN solution. That way you can live-migrate the images, you'll have hardware redundancy on the computing side, and you'll be able to separate domain controllers, etc. You'll be able to move machines around if load is not balanced, and one machine should still be plenty bulky enough to run all the systems if the other needed repairs.
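To make the "move while hot" part concrete: once shared storage is in place, a vMotion is just a relocate call against vCenter. A minimal pyVmomi sketch, with all names and credentials made up for illustration:
Code:
# Sketch: script a vMotion with pyVmomi. Assumes the VM's disks live on shared
# storage, so only the running host changes. Names below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab shortcut
si = SmartConnect(host='vcenter.example.local', user='administrator@vsphere.local',
                  pwd='password', sslContext=ctx)
content = si.RetrieveContent()

def find_obj(vimtype, name):
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.DestroyView()

vm = find_obj(vim.VirtualMachine, 'AppServer01')
dest = find_obj(vim.HostSystem, 'esx2.example.local')

# Changing only the host (same shared datastore) makes this a live vMotion.
task = vm.RelocateVM_Task(vim.vm.RelocateSpec(host=dest),
                          vim.VirtualMachine.MovePriority.highPriority)
WaitForTask(task)  # blocks until vCenter reports success or failure
Disconnect(si)

With local storage you'd also have to set a destination datastore in the RelocateSpec and wait for the disks to copy, which is exactly the slow path described above.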
Thu Jan 07, 2016 6:23 pm
Shinare
SEXNOCULAR
Joined: 17 Mar 2004
Posts: 13332
Location: Up your butt with a coconut!!
LightningCrash wrote:
Shinare wrote:
Hey LC, I do have one of these on hand with 1TB drives in it and I'm currently using it as the big bucket storage for my test ESXi host. Working flawlessly, however it's fairly old and I don't see "multi-initiator" on it anywhere, hehe. I would probably go the "i" (iSCSI) SAN route if it ever came time for me to look into a serious production setup.
Didn't realize you were going back that far on connections. I'd set another server's SCSI card bus ID to something unused (your main server is probably configured as ID 7) and plug it in.
If you want something random to test it on before the main array, I have some old Sun MultiPack 711s that are SCSI you can try out.
LOL, well, funny story about that. Way back in 2005 we got an EMC CX300 FC SAN with two SilkWorm 200E Fibre Channel switches. That was connected to five servers, each with 2x FC HBAs for fault tolerance and teaming. That SAN was a single shelf with 10x 146GB 10k SCSI drives in it. Unfortunately, a few years after its purchase it was clear that it was woefully undersized for our needs, and we needed more space in an almost-emergency way. The great thing touted to us about the SAN was that it was SUPER EASY to add a shelf of drives to immediately increase available space. So we had Dell quote us another shelf of drives, and it was going to be $20k for another 1TB. 1TB wasn't going to cut it, so we were going to need multiple shelves. Only two servers needed more storage, so I found the 2-channel U320 DAS enclosure quoted above, which could hold 16x 1TB SATA drives (the biggest available at the time) and could host one volume for one server on channel 1 and another volume for another server on channel 2 (each server with a single U320 card). The cost on that, fully populated with 1TB drives, was <$5k. So, $5k or >$80k... the choice was easy for this cheapass.
The 1TB of total space on that mothballed EMC SAN, and its PCI-X FC HBAs, are keeping me from going that route (my current servers only have PCIe slots). *sigh*
Edit: RE: the 711s, thanks for the offer! However, if you had a couple of PCIe U320 HBAs collecting dust on a shelf somewhere that I could play with, I could maybe use that DAS in a similar way.
_________________
For with what measure you measure it will be measured to you.
Thu Jan 21, 2016 10:34 am