Virtual vs Physical

At work I am deploying a growing number of servers for both production use and development and testing. One of the things we are doing is paying more attention to replicating our production environment at every step of the development and deployment process. That means a lot more servers. How much is a lot? We added 25% more gear enterprise-wide in the month of August. We have been busy, and we are not yet done. I have filled one computer room at a facility, and we are bringing another online at that location sooner than originally planned. We also moved into a larger cage at our data center earlier this summer, then added cabinets to the new cage to accommodate the growth. In yet another office we added a cabinet, and we are working on beefing up the power in the room to accommodate still more gear. Then we need to look at more HVAC. I never used to deal with ancillary issues such as not having enough power to run gear. Five years ago I would never have thought I would be in a situation like this. It is very interesting to me to look at how many rack U's a server takes up when quoting them out, and to decide what to buy based on the cost of a 2U vs. 4U server and how much it would cost to just add another cabinet if you got the bigger gear. As a plain old engineer I would just recommend a server because it did the job. In my current position I need to look at the entire picture.

The next few months will bring another burst of server sprawl. One of the things we are looking at is the cost of buying hardware for every server role we need to fill, versus the cost of getting the same amount of computing power in virtual machines. I must sound like a broken record talking about one of my favorite software companies, VMware. Our GSX server has served us well, and is a great proof of concept of how we can expand our use of VMs. I don't think we will deploy VMs en masse at our data center to do production work, but we have plenty of other uses for the technology elsewhere that make looking into GSX or ESX Server a viable alternative to buying more gear.

To me it boils down to two factors. The first, of course, is cost. How much does it cost us to buy and deploy a dozen servers, power them, and get KVMs and rack space for them, versus purchasing hardware for a VM server (or two) that can handle the same workload?
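The cost side of that comparison can be sketched as a simple model. Every number below is a made-up placeholder, not a real quote; the point is just the shape of the math: hardware plus rack space plus KVM ports plus power, for N physical boxes versus a couple of beefier VM hosts.

```python
# Rough physical-vs-virtual cost sketch. All dollar figures and wattages
# below are hypothetical placeholders, not actual vendor quotes.

def deployment_cost(num_boxes, hardware_each, rack_u_each,
                    cost_per_u, kvm_ports_each, cost_per_kvm_port,
                    watts_each, cost_per_watt_year, years=3):
    """Total cost to buy, rack, and power a set of servers over `years`."""
    hardware = num_boxes * hardware_each
    rack = num_boxes * rack_u_each * cost_per_u
    kvm = num_boxes * kvm_ports_each * cost_per_kvm_port
    power = num_boxes * watts_each * cost_per_watt_year * years
    return hardware + rack + kvm + power

# A dozen 2U physical servers...
physical = deployment_cost(12, hardware_each=3000, rack_u_each=2,
                           cost_per_u=100, kvm_ports_each=1,
                           cost_per_kvm_port=150, watts_each=300,
                           cost_per_watt_year=1.0)

# ...versus two larger 4U VM hosts carrying the same roles.
virtual = deployment_cost(2, hardware_each=12000, rack_u_each=4,
                          cost_per_u=100, kvm_ports_each=1,
                          cost_per_kvm_port=150, watts_each=500,
                          cost_per_watt_year=1.0)

print(f"12 physical boxes: ${physical:,.0f}")
print(f"2 VM hosts:        ${virtual:,.0f}")
```

With these invented inputs the VM hosts come out well ahead, but the real answer depends entirely on the actual quotes and on how many VMs each host can realistically carry.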

The second factor is ease of use. How quickly can we build a physical server for use? How quickly can we restore it to a preset level of configuration for use in dev and QA? How fast can we buy and deploy hardware when a need comes up for a new server? The same questions apply to virtual machines.

I am a bit biased. I want virtual machines. The flexibility they give you is amazing. I also know that I have SLAs to keep and costs to consider, so if the per-server (or per-server-instance) costs are too high, we can't do it. For now, I spoke with my boss late this week to identify what applications need homes in what environments. The next step is to crunch the numbers to get all of our options. The VMware user groups have been helpful in figuring out realistically how many VMs you can get per GSX or ESX server. More news as the project unfolds.

HP DL320 SATA Servers

I have been working with a few HP ProLiant DL320 SATA servers. The price was right and they have decent specs. The issue I had with the last round of Supermicro SATA boxes wasn't the Supermicro boxes themselves, but the SATA RAID cards that went into them. They would fail much more often than their SCSI counterparts, and the array controllers would not rebuild without crashing the computers. We tried several brands; they were a crap shoot. We are using 3ware for new deployments of older-chassis SATA servers. They seem the best of all I have seen. These HP boxes seem to rebuild fine in our tests. Time will tell if the drives hold up, but so far I think HP finally got it right with a low-end non-SCSI RAID system.

More Data Cleanup

I moved all my files off of one of my FireWire drives this week so I could format it to work on my Mini. Damn NTFS not working on the Mac. It took 36 hours to robocopy the data off of the FireWire drive onto another USB drive. Even though it was a USB 2.0 drive, I think it was running at 1.1 speeds. 36 hours was an awfully long time to copy 90 gigs of stuff, even if it was mostly small files.
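The back-of-envelope math supports the "running at 1.1 speeds" suspicion: 90 GB in 36 hours is well under even USB 1.1's theoretical ceiling, and nowhere near USB 2.0 speeds (small-file overhead drags the effective rate down further).

```python
# Effective throughput of the copy: 90 GB moved in 36 hours.
gigs = 90
hours = 36

mb_per_sec = gigs * 1024 / (hours * 3600)   # treat 1 GB as 1024 MB
print(f"Effective throughput: {mb_per_sec:.2f} MB/s")

# Bus ceilings for comparison (signaling rates, in MB/s).
usb11_max = 12 / 8    # USB 1.1 full speed: 12 Mbit/s  -> 1.5 MB/s
usb20_max = 480 / 8   # USB 2.0 high speed: 480 Mbit/s -> 60 MB/s
print(f"USB 1.1 ceiling: {usb11_max} MB/s, USB 2.0 ceiling: {usb20_max} MB/s")
```

About 0.7 MB/s, so even a drive stuck at USB 1.1 speeds had headroom to spare; lots of small files will do that.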

I am in the process of backing up old backup files to DVDs. I am also just deleting tons of old crap that I don't need anymore.

SATA RAID?

Not all RAID cards are made the same. This is especially true for SATA RAID. I have deployed over two dozen SATA and IDE RAID servers in recent history (mainly SATA, but a few IDE boxes). My recent opinion is that they work great until they break. When they break, they break hard. You begin to wonder if RAID ever worked right when dealing with SATA RAID. Then you go back to an HP (or even Dell) SCSI RAID server, and after 10 minutes of using it you feel like you found religion or something.

I have used Promise and Adaptec two- and four-drive SATA RAID cards, and both kind of suck. I have used many Adaptec cards on numerous servers and have had nothing but problems with them. Today marked the fourth four-drive 2410SA card that has killed a server. Hey Adaptec: when a RAID card capable of RAID 5 loses a drive, it is supposed to continue to function. That is the whole idea of RAID 5. Not sure if you know that from the performance of your cards. When losing a drive (one out of four, by the way) the RAID array is supposed to stay up, the server should continue to function, and when you reboot you are not supposed to have a blank configuration on your card.
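The "whole idea of RAID 5" complaint above is worth spelling out: parity is just the XOR of the data stripes, so any single missing stripe can be rebuilt from the survivors. A toy sketch (not a driver, and nothing to do with any particular card's firmware) of why losing one drive out of four should never take the array down:

```python
# Toy illustration of RAID 5 parity: the parity block is the XOR of the
# data blocks, so any one lost block is recoverable from the rest.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# One stripe across a 4-drive RAID 5 set: three data blocks plus parity.
d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks([d1, d2, d3])

# Drive 2 dies: its block is rebuilt from the survivors plus parity.
rebuilt = xor_blocks([d1, d3, parity])
assert rebuilt == d2
print("stripe rebuilt:", rebuilt)
```

The same XOR trick recovers whichever single block is missing, which is exactly why a one-drive failure should leave the array degraded but running.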

Also, is it written somewhere I cannot find that the 1210SA card does not support hot-swappable drives? Because if I buy a server that allows for hot-swappable drives, I expect my RAID card (even if it is a cheap SATA card; hey, it is still RAID) to rebuild when I put a new drive in the machine. Oh, and it would be really nice if the card would rebuild without being told to. I mean, Dell and HP cards do that. Why do I have to invoke a rebuild every time we have a bad drive?

I will be honest: I was surprised that I have had such problems with the Adaptec cards, since I am generally a fan of their products. I just don't know if SATA RAID is fully baked in general. I have been a fan of SATA for a while. It is a great cheap alternative to SCSI, but a RAID system should be reliable. I can understand losing drives quicker on a cheaper SATA system, and that happens all the time for me compared to SCSI drive systems, but the problems I have been seeing go beyond just dead drives. So far the only cards I have seen that show signs of progress are the 3ware two-drive cards, and some new embedded RAID cards on HP SATA servers. Ironically, Jayson tells me the HP cards are in fact Adaptec or Intel. Go figure.

Definitions are thanks to a great little site called Wikipedia!

Mac or Thinkpad?

Mac or Thinkpad T43? What do I take with me for a week away? For personal trips I would take the Mac, hands down. But I am going away for work next week. On one hand I get the Thinkpad from work, and I like it. On the other hand I have been working off the Powerbook as my primary machine since I got it. I use it as my main email, web, chat, and document-editing platform. That is a lot of my day, but not all of it. I still use my Thinkpad or desktop PC for terminal sessions (I am not a huge fan of the Mac RDP client, and the MMC plug-in for remote desktop still rocks) and VMware. That is also a lot of my day.

So what to do? The Thinkpad has IP Communicator on it, so I can VPN into work and then use my phone extension. My Powerbook has Skype, and the Xten VoIP client for Broadvoice if I choose to set up my account for it. I can use Bluetooth headsets with both machines, but the Mac works better, hands down. The Thinkpad has a bigger screen, and since it is work property I am less concerned about beating it up (but I know I still care, so that is not such a big issue). Of course the bigger screen is also harder to see, since its high resolution makes everything so small. Bad for me and my glasses :(

The Thinkpad has good battery life, and I have two batteries. Shall I go on? I am thinking that if the Powerbook works fine with the new Cisco VPN client I will just take it. I have grown accustomed to using it, even though somewhere in the back of my head part of me says take the Thinkpad.

Why do I care what I take? Why should you? For me, it is about what I will use as my lifeline to the office while I am away for a week. Hopefully I won't need it, but if I do, it is a big deal. Why should others care? I don't know; I felt like writing about what I was thinking. It also kind of boils down to the age-old question: Mac or Windows? For me the answer is both if you can, but if you have to choose, I think I will edge over to the Mac side!!!

Dual Headed Monitor

Chris demoed a dual-monitor setup for me a few weeks ago when I was visiting. I know dual monitors are nothing new for most tech people, myself included. I remember my cousin Wayne showing me one several years ago with two 15″ flat panels. That was back when one flat screen was a big deal, but two was just decadent.

I had played around with the dual-monitor concept just once, over a year ago. I set up two flat panels (15″ Samsungs) side by side. It was cool, but it took up desk space. My thought at the time was: why not just have two computers set up? And that is exactly what I did. I had two monitors with two computer setups: one with a KVM and up to four computers, and another for my laptop. Yes, I know I was geeking out. But this was for work, so that makes it OK.

As recently as a few weeks ago, Jayson tried the dual-monitor thing, but gave it up because of desk-space issues.

When I saw Chris's setup, I realized the advantage of actually using two displays: one for general work stuff, and the other for terminal sessions (in full screen) or for leaving our monitoring tools up on screen all day. I found a video card I could use (one with dual-monitor support built in, so I had no need to find a PCI card to go along with my AGP one) and put it in my desktop box. Besides the dual-head setup looking really cool with the Matrix screen saver running on both monitors, it was also practical, since I left our monitoring "threat board" up on a screen all day. I did lose some desk space, but I got rid of the monitor stand I had, so I think it evens things out. Not sure if I will keep this setup, but for now it is working out for me!

Data Center Move Completed

Jayson and I finished up our data center move this morning. We have a lot of free space that Bob will use up right away. It is also as neat as we are going to get it. We had some interesting moments configuring the network. We were putting in Gigabit uplinks between all the switches and ran into some problems. The ports had been configured weirdly by Keith. Jayson didn't understand half the crap that was set up, and neither did I. We cleaned it up and we were good to go.

We also had a problem with one of our servers' Promise IDE disk arrays. This is something like the third time we have had trouble with this particular array, and the umpteenth time we have had trouble with these IDE arrays. We spent hours trying to make it work, without success. Danny and our DBA staff were working on some alternatives I suggested today while Jay and I got some sleep.

As it stands now, that server is still causing problems. Contingency plans are already in the works to replace it :(

Data Center Update

After working all day Saturday, Monday night, and Tuesday night, we are about 90% done with our move to the new cage. We have three servers and three desktops (don't ask) left to move, along with a core switch and the firewalls. We will do that Thursday night. The new cage looks good. Some of the cable management could be better, but Jayson assures me it will be cleaned up when we are done. We have already started taking apart the old cage. It is/was a mess.

Since Saturday I have done nothing but deal with this move. Once it is over I will be very happy. The good news is we think we were able to squeeze out even more space savings than we originally thought. That is good news, since we are planning on expanding with a bunch of new servers soon.

Data Center Work

Yesterday I got the call (email, actually) that my company's new cage in our data center was ready for use. Jayson and I quickly got a U-Haul van to move all the new gear from our office up to the new cage so we could stage everything. I ended up doing most of the lifting, since Jayson had to stay with the truck; he couldn't find parking. After dropping everything off at the cage we took the truck back.

Today we (along with Jon) will break out the gear and mount the shelves and rack kits. This will set us up for the server move later next week.

I am not used to being up this early on a weekend. Hopefully I am fully awake for the work:)

Incremental Update

I admit I have done it again. Last week I quietly went to the Apple store and got a new Powerbook. It has been over a year since I bought my previous one. I was going to try not to upgrade, but the price was right and I was so tempted. I went from a 1 GHz 12″ to a 1.5 GHz 12″. Several other improvements in the Powerbook led me to the upgrade. The old one is already sold and off to its new owner; one of the guys in my group upstate bought it. So Chris gets a near-perfect-condition "pre-owned" Mac, and I get a new one for virtually nothing!

So far I love the new machine. I am slowly moving everything over to it and using it as my primary machine! As the title says, this is not some major upgrade, but it is a really nice incremental upgrade to an already fantastic piece of hardware!