FAQ on Shared Storage Pools in VIO Server 2.2


Below is a series of questions asked during an AIX VUG (Virtual Users Group) webinar on Shared Storage Pools (SSP) given by Nigel Griffiths back in December 2011. We are sharing it on the blog because it answers many of the questions that come up about the new Shared Storage Pools.

Q: IBM used to say that NPIV gives you better performance than VSCSI. Has this changed? If not, is SSP recommended even on heavy I/O solutions?

A: I remember hearing of a performance improvement due to the VIOS being more of a pass-through. However, Nigel made a comment up front that he hasn’t seen a noticeable performance difference, and I have never seen actual measurements. The downside of NPIV is a lot more work in setup and maintenance. We have no performance stats on SSP yet – I would guess it is about the same as vSCSI. Given it is only just out, I think we need to build experience and then benchmark it.

Q: So does that mean that we can have a storage pool only between two Power system frames?
A: You can have a Shared Storage Pool (SSP) across any four VIO Servers – these could be in the same server or spread across physical Power Systems servers. There is no frame or rack limitation!

Q: Is there a limitation on the number of physical servers or VIO Servers being part of a cluster for a single storage pool?
A: Four Virtual I/O Servers (VIOS) – which could be in four different physical servers.

Q: Can we expand SP size dynamically?
A: Why would you do that?

  1. Given thin provisioning, you should think big so you don’t run out in the first place.
  2. Adding new space is one command (then cfgmgr on AIX and add the new disk to a VG), whereas resizing would mean lots of commands to get the OS in the VM to realize a change was made – see the sketch after this list.
  3. I have not seen a command to make a Logical Unit (virtual disk) larger.
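
As a rough illustration of point 2, the usual approach is to grow the pool and/or give the VM a new LU rather than resize an existing one. The cluster, pool, LU and hdisk names below are invented and the exact flags can vary by VIOS level, so treat this as a sketch rather than a recipe:

    # On a VIOS node: add another physical volume to grow the shared pool
    chsp -add -clustername demo_cl -sp demo_sp hdisk9
    # On the VIOS serving the client: create and map a new thin LU instead of resizing one
    mkbdsp -clustername demo_cl -sp demo_sp 50G -bd vm1_data2 -vadapter vhost0
    # On the AIX client: discover the new vSCSI disk and add it to an existing volume group
    cfgmgr
    extendvg datavg hdisk3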

Q: Are the vSCSI adapters actual HBAs?
A: Virtual SCSI adapters are a sort of virtual HBA!

Q: Is EMC PowerPath supported? For example, can you add an hdiskpower disk to the cluster or do hdisks have to be specified?
A: All disks that are currently supported with the VIOS are supported with SSP. The SSP commands just refer to the “hdisks” or equivalent.

Q: What prevents one from selecting an existing SAN LUN that is already in use? For example, if you were creating another storage pool.
A: The VIOS puts special magic details on the disk so it knows if it is in use by AMS and SSP. This is why you have the prepdisk and cleandisk commands. Also chkdev can give you information. The lscluster command will give you a list of the disks that SSP is using and the hdisk names – including the hdisk name on all the other VIO Servers – so it is very simple to see if SSP is using a disk.
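
For reference, checking a candidate disk before you use it looks roughly like this; the hdisk name is just an example and the exact output fields depend on the VIOS level:

    # Does hdisk9 already carry VIOS in-use metadata (SSP, AMS paging device, vSCSI mapping)?
    chkdev -dev hdisk9 -verbose
    # Which disks is the SSP cluster already using (listed for every node in the cluster)?
    lscluster -d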

Q: Are ITM PX or VIOS agents cluster aware and capable of monitoring SPs?
A: I don’t know – I guess we have to wait for their next update, i.e. after SSP2 is available. The client Virtual Machines just see the normal vSCSI disks – no change there. It is only the VIOS that knows about SSP.

Q: Nigel indicated that POWER7 blades would be supported for SSPs. Do they need to be SDMC managed? Or is IVM sufficient?
A: I have not tried this on a blade, but these are regular VIOS commands and work regardless of an HMC, so I don’t see a need for SDMC or IVM. To get the HMC panels you need HMC 7.7.4. SDMC is not at the equivalent level until its next update, currently scheduled for Q2 2012. The IVM is meant to be functionally stable – so I am not sure if it will be updated to have a GUI for SSP. As you can do everything with simple commands it is not critical IMHO.

Q: I’m not aware that there are any statements regarding NPIV vs. VSCSI performance. Both have similar CPU overhead and additional latency for IOs.
A: thank you, Dan. (from Dan Braden)

Q: Does reversion to an earlier snapshot affect how storage is allocated with thin provisioning?
A: If you take a snapshot, then use more disk space and then go back to the snapshot, the extra disk space used should be freed, along with the disk blocks that had to be copied to allow updates to the original set.
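
A minimal sketch of that snapshot workflow, with made-up cluster, pool and LU names – check the snapshot command on your VIOS level for the exact flags:

    # Take a snapshot of a client LU
    snapshot -create vm1_snap1 -clustername demo_cl -spname demo_sp -lu vm1_rootvg
    # ... the client writes more data; copy-on-write blocks and extra thin space get used ...
    # Roll back: the blocks used since the snapshot should go back to the pool
    snapshot -rollback vm1_snap1 -clustername demo_cl -spname demo_sp -lu vm1_rootvg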

Q: Is this recommended and released for a 2TB SAP system on DB2 9.7 HADR?
A: You should check with SAP & DB2 for official support statements. SSP2 is a fully supported product and you will get support for any issues from the VIOS team. This is not a beta test!! Whether it is sane to immediately commit a large, complex, production-critical workload to new features is up to you. Personally, I would do some testing first.

Q: Is this recommended to use with NetApp disks or more oriented to DS8k disks?
A: SSP2 works with any currently supported LUNs and is not tied to any particular type of backend subsystem.

Q: I don’t understand why you say that for NPIV it is necessary to pre-zone the LUNs. The WWNs of the adapters do not change (apart from primary to secondary). Hence, Live Partition Mobility (LPM) does not require additional zoning.
A: I think he is talking about the two sets of WWNs, where the second set is zoned to the target machine that LPM will go to. Some data centers don’t like to have those zoned ready for any future LPM, or, if they zone them on demand (when about to LPM), it can take a long time to get them set up.

Q: Is reclamation supported? Meaning a block that was previously used but is no longer used because the file containing that block was deleted. Will that unused block be reclaimed by the pool for future use?
A: No and no. The problem (as with other thin provisioning technologies) is signaling that a block is unused from the OS that does not know it is thin provisioned. It is on the list of cool features for a future release – this is not a commitment.

Q: Have you measured how much of an IO delay a copy on write operation introduces?
A: Nope – that would be really hard to detect. I guess I could hack a copy of ndisk with 64KB writes then 1 in 1024 writes should take a little longer. What use would the information be? Perhaps, the iostat maximum I/O time would reflect the 1 in a 1000.

Q: It looks like there’s only one cluster repository disk. Is that a single point of failure?
A: Correct; as SSP relies on the Cluster Aware AIX (CAA) technology it has inherited this. It is a high priority item for CAA to address. You could use SVC to duplicate the repository, and you can recover the repository using the viosbr command (provided you took a backup). I have not tried this yet.
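
The backup mentioned here would look roughly like this; the cluster and file names are invented and the exact options may differ by release:

    # Back up the cluster and pool configuration (including the repository metadata)
    viosbr -backup -clustername demo_cl -file demo_cl_backup
    # Later, inspect the backup (before a viosbr -restore)
    viosbr -view -clustername demo_cl -file demo_cl_backup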

Q: You said the “cluster -create” started the agents on the VIOS... If you don’t run cluster -create on all nodes, what starts the agents on the other VIO Servers?
A: You run cluster -create on the first VIOS, then on that VIOS I ran cluster -addnode.
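
For illustration, a minimal sequence (host names, hdisk and pool names are placeholders; flags as per the VIOS cluster command, which may vary slightly by level):

    # On the first VIOS: create the cluster, repository disk and shared pool in one go
    cluster -create -clustername demo_cl -repopvs hdisk2 -spname demo_sp -sppvs hdisk3 hdisk4 -hostname vios1
    # Still on that VIOS: join the other VIO Servers to the cluster
    cluster -addnode -clustername demo_cl -hostname vios2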

Q: do you run cluster -create on all nodes?
A: No, just once – this creates the repository.

Q: The example shows dual paths to a single VIOS, can you use dual VIOS?
A: I think there were diagrams, like for the pre-demonstration slide, which showed Dual VIOS to a single client VM. It does work as I tried it. See the slide on setting it up on the second VIOS with mkbdsp – you must not use the size option (or you create another LU).
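
In other words, something along these lines (names are illustrative); the size is given only on the first VIOS, so the second mapping points at the same LU:

    # First VIOS: create a 20 GB thin LU and map it to the client's vhost adapter
    mkbdsp -clustername demo_cl -sp demo_sp 20G -bd vm1_rootvg -vadapter vhost0
    # Second VIOS: map the existing LU - no size option, or you would create a second LU
    mkbdsp -clustername demo_cl -sp demo_sp -bd vm1_rootvg -vadapter vhost1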

Q: So clusters are required for storage pools? Not sure I got a yes or no answer on that.

A: “Cluster” is a very ambiguous term! SSP uses Cluster Aware AIX (CAA) to make a cluster of VIOS with access to the same disk pool. If you only have one VIOS in the SSP, just to get the thin provisioning and snapshot features, then it is debatable whether this is really a “cluster” – can you have a cluster of one node?

Q: With all the new storage vendors moving to their own pools, thin provisioning, etc. is this recommended on the VIOs?
A: Yes. Newer disk subsystems can do thin provisioning etc. at some additional cost, but these tend to be in the hands of the SAN team and use odd SAN tools. SSP raises these features to simple VIOS commands, so once it is set up no further SAN work needs to be performed, plus LPM is fully available.

Q: I don’t see Virt Storage mngr on my HMC? Is there a min version?
A: Gosh you must be using truly ancient versions – IMHO you should get to HMC 7.7.3 and VIOS 2.1 ASAP. I don’t have the time to dig up historical facts – you could ask Support. SSP is for POWER6 and POWER7 – I suspect you might have POWER4 or 5!

Q: What will it take to convert NPIV to vSCSI ?
A: Assuming that by vSCSI you mean moving to SSP over vSCSI: first, a complete backup. Then, if disk space allows: bring the SSP disk space online and add the disks to the VGs. Then migratepv from the NPIV disks to the SSP disks. A quick bosboot and bootlist. Remove the NPIV disks from the VGs and delete them, then clean the NPIV adapters off the LPAR profile. Oh, and then disable all that MPIO config you had to configure for NPIV.
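
On the AIX client that sequence would look roughly like this, assuming hdisk0 is the old NPIV LUN and hdisk4 is the new SSP-backed vSCSI disk (disk names are examples only):

    extendvg rootvg hdisk4      # bring the SSP-backed disk into the volume group
    migratepv hdisk0 hdisk4     # move all physical partitions off the NPIV disk
    bosboot -ad /dev/hdisk4     # rebuild the boot image on the new disk
    bootlist -m normal hdisk4   # boot from it from now on
    reducevg rootvg hdisk0      # drop the NPIV disk from the VG
    rmdev -dl hdisk0            # and remove its device definition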

Q: How do I get to know that my system is connected to DS from aix console ?
A: I am a little unclear about the question! Assuming DS is your IBM DSxxxx disks or short for Data Storage. From AIX on the client VM, you can’t tell if the disks are a Logical Volume on the VIOS, a whole disk on the VIOS, a LUN disk on the VIOS, file-backed disk space from the VIOS or from a Shared Storage Pool. This is a very good thing as it means zero changes to the OS.

Q: So we need to run the allocate on ALL VIOs in the cluster or just in the case of two VIOs on the same frame?
A: You allocate the disk space to a VM (LPAR) on one of the VIOS nodes and all the others will automagically know about it.
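
You can confirm this from any other node in the cluster, for example (illustrative cluster and pool names):

    # Run on a different VIOS from the one where the LU was created
    lssp -clustername demo_cl -sp demo_sp -bd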

Q: As a method of migration can I mirror a LUN to a SSP LUN and still reap rewards of thin provisioning on the SSP LUN? (Will ‘mirrorvg’ write to the whole VG as it mirrors?)
A: Logical Volume re-silvering means writing to all the disk blocks, so no thin provisioning. If you have unused space in the Volume Group it will be thin. To be precise AIX does not do LUN mirroring – only LV mirroring.

Q: How do blocks get ‘un-provisioned’…write nulls to it?
A: Disk blocks are not freed in this version. See a similar question above for more comments.

Q: Will this presentation be available for download?
A: Presentation is on the wiki: https://www.ibm.com/developerworks/wikis/display/WikiPtype/AIX+Virtual+User+Group+-+USA

Q: Is LPM only supported between the 4 VIO servers in the pool since the client disk presentation is unique within the cluster?
A: Yes.

Q: There are thin provisioning features. Any intentions for deduplication features for storage pools?
A: Not in this release. I am sure that is on the possible list of features for the future, but this is a comment and not a commitment.

Q: Can the network connection be on the same adapter as the VIOS SEA connection? Should it be a dedicated adapter?
A: The VIO Servers have to have dedicated adapters over which the SEA is created as normal. The client VM can have virtual network adapters using the VIOS SEA or dedicated network adapters. Virtual adapters are required for Live Partition Mobility – as normal.

Q: Did the vhost in the mkbdsp have to be created first?
A: Yes, you create virtual SCSI adapter pairs for the VIOS and client VM to communicate – as normal.

Q: Where are the cluster commands run from…one of the VIOS…?
A: Yes. On the VIOS

Q: Does it matter which one?
A: You run cluster -create on one VIOS, and I ran cluster -addnode on the same VIOS; I have not tried other permutations. If you want to create/remove disk space for a particular client VM then you need to run the command on the VIOS that is connected to it – so the vhost adapter is local to the command. Most of the other commands are node independent.

Q: is that a recommended configuration : 40 VIO client per VIO server?
A: That is the maximum supported.

Link to a guide for configuring SSPs -> https://www.ibm.com/developerworks/wikis/display/WikiPtype/Shared+Storage+Pools

And some of the minimum requirements to use this technology:

* POWER6 & POWER7
* PowerVM Standard or Enterprise edition, version 2.2 or later
* VIOS 2.2.0.11 Fix Pack 24 Service Pack 1 or later
* 4 GB RAM
* AIX 5.3 or later on the latest TLs
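
To check what you already have, the usual commands are:

    # On the VIOS, as padmin: show the VIOS level
    ioslevel
    # On the AIX client: show the AIX release and technology/service pack level
    oslevel -s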