Friday, July 12, 2013

EMC acquires ScaleIO, but why?

EMC's ScaleIO Acquisition

I have been thinking about EMC's acquisition of ScaleIO since it was publicly announced yesterday and thought I would share my thoughts with you fine Internet readers.

What is ScaleIO?

Before we start to talk about why EMC would buy this company, let's talk about what they do. I had a conversation with them about two months ago when I first heard about them. The long and short of it is that they take the storage that is available on individual machines and make it available as a distributed storage pool. The selling point is that it is a software-only solution: you bring your own storage, and with the software you can have a SAN without all the expensive hardware from a large vendor like EMC or HP. This in turn leads to lower costs, which is what they focus on in their marketing.

The software can be used on enterprise linux distributions and VMware ESXi. The ESXi piece is what got my attention and why I gave them a call to talk about the solution. I was thinking about using this in a VDI install. The idea would be to get some powerful standalone machines with a decent number of local spinning disks and SSDs for caching. With this, I would have an island of VDI compute and storage that could scale out both storage and compute as needed. This could keep costs down and still be flexible when needed. Seemed like an interesting idea. At first, that is...

But what about vSAN?

Then I remembered that VMware has a product in waiting that solves this problem in a more native way: vSAN. The ideas behind vSAN and ScaleIO are the same: local disk in each ESXi host made available as a storage pool for all ESXi hosts to use. The difference is that vSAN would be native to ESXi and therefore more easily managed by vCenter, and in turn more easily usable by vCloud ( hopefully ).

So why?

So why would EMC, the semi-parent company of VMware, buy ScaleIO? At first, I guessed they could sell it as a software SAN solution to fill in the gaps at the lower end of their portfolio, but I don't think this makes sense. That would make more sense for Dell than EMC, since Dell already sells both compute and storage.

This solution could be used with something like XtremSF to make all the flash cards in all the ESXi hosts available as a distributed flash SAN. Then with XtremSW, you could use the capacity of all the XtremSF cards in the cluster for caching. But then again, this is what vSAN could provide as well, maybe without all the caching at first. It seems like EMC would be competing with itself.

Then I realized how this might be used. Remove ESXi from the previous scenario. If you are not in a VMware environment, then ScaleIO starts to make sense. You still need a distributed filesystem or a way to arbitrate access to the distributed block device. This is quite a curious scenario, since EMC, along with VMware, is pushing customers to use ESX and the vSphere suite for all apps. Maybe there is a large enough market of people not virtualizing all of their apps where this might get EMC a foot in the door.

Another usage scenario where this might be a good fit is HPC. The HPC folks tend to use commodity hardware and distributed filesystems for storage in the first place. With ScaleIO, they get a nice packaged solution, and EMC gets a way in the door with HPC shops.

Lastly, maybe EMC acquired this product simply to remove it from the general market. It competes both with what EMC offers in hardware today and with what VMware is going to offer very soon in software.

What do you think? Leave some comments below.

Tuesday, August 3, 2010

Recover Master Password in MIT Kerberos

So I am in the process of updating the KDC at my job ( Open Systems @ University of Florida ) and we ran into a few issues:
  1. RHEL does not have a new enough version of MIT Kerberos
  2. No one can remember the "Master Password" for the KDC
  3. The KDC is running on older Power hardware under AIX
It took a while but it looks like we finally figured out how to get from the old AIX/Power box to RHEL/x86_64.

The following is how we solved each one of the problems from above.

RHEL has an older version of MIT Kerberos

As was mentioned in my previous posts, I hate you RHEL and Kerberos 1.8 on RHEL, I was able to get MIT Kerberos 1.8 compiled, packaged, and installed on RHEL5.5. See those posts for my workarounds.

Anyone remember the KDC Master Password?

The KDC was installed back in 1996. At the time, the password was known, and the stash file was created so that the KDC could start automatically. There were about 4 people that knew the Master Password, either because they entered it or because they had access to the piece of paper that had the password on it. The person that created the Master Password no longer worked at UF, so when I needed to know the Master Password I had to find someone that had access to the piece of paper. After some searching, it was determined that the piece of paper was long lost and assumed destroyed. So no one knew the Master Password. All we had was the stash file.

We thought, well, that sucks, but at least we have the stash file so we can move forward. WRONG! It turns out the stash file is not endian safe: the file is written in the native endianness of the machine. Since we are going from a Power machine to an x86_64 machine, taking a KDC dump and copying over the stash file will not work.
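A quick way to see this byte-order difference for yourself is to interpret the same two bytes as one native-order word on each machine. This is a generic illustration, not anything Kerberos-specific:

```shell
# Interpret the bytes 0x01 0x02 as a single 16-bit word in the host's
# native byte order -- which is how a stash file written on another
# architecture gets misread.
printf '\001\002' | od -An -tx2 | tr -d ' '
# little-endian x86_64 prints 0201; big-endian Power prints 0102
```

Run this on both boxes and the outputs disagree, which is exactly why the stash file cannot simply be copied across.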

After much research by one of my co-workers, we found an option to the kdb5_util dump command that allows you to re-key the principals based on a new Master Password. That option is -mkey_convert.

There is one more wrinkle to this story. The new version of MIT Kerberos allows you to have multiple encryption types on your K/M principal, and the default encryption type has also changed. So you will not just need to dump with a new Master Password; you will also need to know the encryption type of your K/M principal. This can be found by executing a getprinc on K/M and noting the encryption type.

So now that we had all the tools in place, here is the procedure:
  1. Dump the KDC database on your primary KDC: kdb5_util dump -mkey_convert kdc.dump. This will ask you for a new Master Password. Set it to the actual Master Password that you want on your new KDC.
  2. Copy the dump file over to the new KDC.
  3. Figure out the encryption type of the K/M principal: kadmin.local -q "getprinc K/M" and note the encryption type. You will need it later.
  4. Create a new KDC database: kdb5_util -r <realm> create. The password you enter here is not important.
  5. Add a new Master Key encryption type: kdb5_util add_mkey -e <enctype>, where <enctype> is the encryption type you noted when you took the KDC dump.
  6. Get a list of the Master Keys: kdb5_util list_mkeys. Note the kvno of the newly created key.
  7. Switch to using the newly created key: kdb5_util use_mkey <kvno>, where <kvno> is the kvno of the key with the correct encryption type.
  8. Create a stash file: kdb5_util stash
  9. Load your dump: kdb5_util -r <realm> load kdc.dump
At this point, you have a KDC with the Master Password that you set in step 1. You also have a stash file with two entries:
  • the temporary Master Password that you entered in step 4
  • the Master Password that you set in step 1
The second one is the important one. If you wanted to, you could create a new stash file, which would remove the first entry.
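For reference, here is the whole sequence in one place. This is a sketch, not a copy/paste recipe: EXAMPLE.COM, the aes256-cts enctype, and the kvno of 2 are placeholders that you must replace with values from your own environment.

```shell
# On the OLD (AIX/Power) KDC: dump the database, re-keying every
# principal to the new Master Password, and note the K/M enctype.
kdb5_util dump -mkey_convert kdc.dump     # prompts for the new Master Password
kadmin.local -q "getprinc K/M"            # note the encryption type listed

# Copy kdc.dump to the NEW (RHEL/x86_64) KDC, then on that box:
kdb5_util -r EXAMPLE.COM create           # temporary password; replaced below
kdb5_util add_mkey -e aes256-cts          # the enctype noted on the old KDC
kdb5_util list_mkeys                      # find the kvno of the key just added
kdb5_util use_mkey 2                      # the kvno from list_mkeys
kdb5_util stash                           # write the new stash file
kdb5_util -r EXAMPLE.COM load kdc.dump    # load the re-keyed dump
```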

There you have it. If you follow these directions you should be able to:
  • set a new Master Password on your KDC
  • change your KDC from one CPU architecture to another
You just need the following things:
  • a working stash file for your current KDC
  • the encryption type of your K/M key

Kerberos 1.8 on RHEL

It turned out the dependency hell was easier to get out of this time. I needed to remove both pam_krb5 and krb5-libs.i386. The weird thing is you can leave krb5-libs.x86_64 installed and the update will work just fine. This probably has to do with the fact that I only built x86_64 versions of krb5-libs 1.8, not i386 versions.
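Concretely, the cleanup came down to a couple of rpm commands. The package file names below are illustrative of my build; yours will differ:

```shell
# Remove the packages that drag in the old i386 krb5 libraries
rpm -e pam_krb5 krb5-libs.i386
# krb5-libs.x86_64 can stay; upgrade in place with the rebuilt 1.8 packages
rpm -Uvh krb5-libs-1.8*.x86_64.rpm krb5-server-1.8*.x86_64.rpm
```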

Again, thanks RHEL. That sucked, but at least I can move on to the next step.

Tuesday, July 27, 2010

I hate you RHEL

So I am now tasked with upgrading the Kerberos Key Distribution Center ( KDC ) from an AIX box to a 64-bit RHEL5 box. Most of the hard work with respect to the steps needed for the transition had already been done by a co-worker of mine. Unfortunately, a wrinkle was thrown into the mess: a version change.

The current KDC is at version 1.6.x. The new KDC we want to go to is version 1.8.x. The reason for the upgrade and transition is new functionality that is only available in the 1.8.x branch which is now considered an enterprise requirement.

No biggie, I thought to myself. I'll just go and get a newer version from RedHat. Nope. RHEL5 is stuck at 1.6.x.

Ok, I guess I'll build a newer package like I have done for other things on RHEL recently. So I started to build a 1.8.x version and, since I do not like reinventing the wheel, I used Fedora Rawhide's krb5 source RPM as a starting point.

After ripping out the Fedora-isms, I got the RPM to build. When I tried to install it on a RHEL5.5 box, it failed dependency checks. It looks like pam_krb5 depends on krb5-libs. Makes sense. Hmm, pam_krb5 depends on the krb4 parts of krb5-libs. That could be a problem.

As of version 1.7 of MIT Kerberos, krb4 compatibility has been removed. That's actually a really good thing since:
  1. I don't have a need for any parts of krb4
  2. MIT has an easier time of maintaining the kerberos distribution
So pam_krb5 is built with krb4 compatibility, or at least the RPM depends on the libkrb4 libraries.
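If you want to verify this on your own box, rpm can show the dependency directly. The grep pattern is just a guess at the relevant library names; adjust as needed:

```shell
# List what the installed pam_krb5 package requires; on RHEL5 this
# should include the krb4 compatibility libraries from krb5-libs
rpm -q --requires pam_krb5 | grep -i krb
```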

At this point I got really frustrated ( read: angry ). I really don't want to go down the RPM dependency graph much further than a single package.

I think my solution to this problem is going to be to remove the need for pam_krb5 on the KDC and install my custom package.

The other alternative is to install an RHEL6 beta on the KDC, since it ships with at least a 1.7 krb5-libs package. That makes me feel really icky. Deploying a beta distribution for such a critical part of the enterprise makes me feel kinda sick.

Thursday, May 6, 2010

EMC World

I'll be heading to EMC World in Boston this Saturday through Thursday. It should be a good week. I've scheduled myself for a bunch of classes about the Celerra and lots of stuff about VMware, most of it best practices and performance tuning.

There are some things that I am curious about. One in particular that I have been thinking about is the Virtual Computing Environment ( VCE ) Vblock architecture. If I am reading the glossies correctly, this is basically a joint venture between VMware, Cisco, and EMC. They seem to want to take a building-block approach to getting people to virtualize their workloads.

The blocks consist of:
  • VMware hypervisor and other software
  • Cisco switches in both hardware and software form ( MDS and Nexus 1000v )
  • Cisco compute in the form of UCS
  • EMC storage as either a CLARiiON or a Symmetrix
  • Some software glue
It is an interesting concept, but I think there are a few flaws in this approach.

Interoperability

One of the tenets of VCE is that it makes deployment of these products simpler due to rigorous testing.

Let's look at all these products. Each one is already certified to work with the others. The "testing" they claim has already been done: all these vendors want/need to make sure that their products work with each other, since their customers already demand it. The only added functionality that VCE brings to the block is the software glue that supposedly makes configuring and maintaining this environment simpler. I'll have more to say about that after I see a VCE demo at EMC World.

Support

You can go to one place for support of this entire complex. I have to question this based on my personal experience. When it comes to one vendor supporting another vendor's gear, the support is weak at best. Case in point: a Cisco MDS 95xx series switch purchased through EMC. When we need support for it, we call EMC, and the feeling we get is that EMC then basically opens a support case with Cisco. They seem to play the middle man. I wonder how this will work when you have a blade in your UCS that needs to be replaced.

Flexibility

The idea behind these building blocks is that they make it easy for you to manage and maintain your environment. If that is the case, why are they only working with VMware, EMC, and Cisco gear? It seems to me the most important part of this whole product is the glue that lets you tie together the management of all this gear in a simple way. I don't see why this must be done with this gear specifically. The hardware and software provided by these vendors have open, public APIs for management. Can the glue software not be abstracted to the point where it would not matter what hardware is behind the scenes?

Hopefully I will get a better idea of why EMC is pushing this approach during EMC World.

Friday, November 14, 2008

The future of Netbooks and MIDs

So I read in a bunch of places yesterday that ARM is getting into the netbook and MID market, and that Ubuntu is going to have a customized ARM distribution for this market. It looks like Intel will not just have competition from AMD and Via in this market but also from the long-standing king of embedded systems. Or maybe this should be the other way around: maybe this is ARM reacting to Intel and friends entering its usual domain. Either way, I think this is great. The current crop of netbooks and MIDs seem underpowered for the amount of useful time they provide, given the amount of juice they suck up. If ARM is to be believed, then the newer Cortex-A series of processors has the equivalent horsepower of a Pentium III with much, much less energy usage and heat dissipation. These procs also have fine DSPs embedded that are capable of decoding h.264 content without much ado.


So why would ARM ask Ubuntu to make a custom distro for their platform? Well, if you have ever used Ubuntu, you will know how dead simple it is to use. Couple this with the recent reports of linux-based netbooks being returned to retailers for the Windows XP version due to complexity, and ARM may be onto something here. Ubuntu is really designed for the desktop and, to some extent, the Windows user. It is really easy to maintain, and "most" things can be configured with a nice, simple, almost Apple-like GUI.


Another nice thing about ARM partnering with Ubuntu is that Ubuntu has a large userbase that will test the builds and give somewhat useful feedback. On top of that, the Ubuntu folks will be able to optimize not just the GUI that the user sees but also total system performance, since their users will ask for it.


This is really a win/win situation for all. ARM gets a well-tested and stable desktop operating system that most users will understand and hopefully enjoy using. Ubuntu gets even more press than it is getting now and spreads the word of linux into more people's homes.


What do I hope to get from this partnership? I am really hoping that ARM will help the ffmpeg/mplayer, Xorg, and kernel guys get access to the powerful DSPs and video processors on these chips. I would love to see a 7" screen with WXGA (1366x768) or WSXGA (1440x900) resolution. This would be optimal for both browsing the web and watching videos. With the great power management of ARM chips, these things might actually last a whole flight from, say, NY to Tel Aviv. The price might also be very competitive: much cheaper than the initial netbooks and MIDs that ranged from $400-$500. I am hoping for that original ASUS Eee PC price point of $200. If they can hit that price point, then I am there.


I have a few thoughts about ARM and virtualization tech, but I am going to let that stew in the brain for a bit before I write down my thoughts.