Environment
Data ONTAP 7.2.4 (Simulator)
Solaris 10 Update 6 (X86)
Snapshot for LUN
On NetApp Data ONTAP, LUNs are created within volumes. Snapshots are taken at the volume level rather than the LUN level, so a snapshot of a volume captures every LUN residing on that volume. Consequently, restoring a volume snapshot reverts all the LUNs in that volume to their state at the time the snapshot was taken.
For example, if both lun1 and lun2 are created in vol1, a snapshot of vol1 will capture data on both lun1 and lun2. If vol1 is restored from the snapshot using the command "snap restore -s <snapshot name> <volume name>", then data on both lun1 and lun2 will be reverted to its original state.
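On the array CLI, taking a volume snapshot and performing a full-volume revert looks like the sketch below (the snapshot name vol1-snap is an example, not mandated by ONTAP):

```shell
# ONTAP 7-mode CLI, run on the array console.
snap create vol1 vol1-snap        # take a snapshot of vol1 (covers lun1 and lun2)
snap list vol1                    # list existing snapshots of vol1
snap restore -s vol1-snap vol1    # revert the whole volume, including both LUNs
```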
If you only want to restore data on a single LUN, you can instead create a clone LUN from the snapshot and map it to the server for restoration. Cloning a single LUN from a snapshot does not change data on any other LUN, even one in the same volume.
For example, suppose lun1 and lun2 are created in vol1 and a snapshot vol1-snap of vol1 is taken. If you later want to restore data on lun1 from vol1-snap, follow the steps below to recover the data by creating a clone LUN.
1. Create a clone LUN using “lun clone create /vol/vol1/lun1-orig -b /vol/vol1/lun1 vol1-snap”.
2. Bring the clone LUN online if it is offline, then map the clone LUN to an initiator group using "lun map /vol/vol1/lun1-orig ig1".
3. Discover the new LUN on the server.
4. Mount the file system and start restoration.
5. Unmount the file system on the server, then remove the clone LUN on the array.
Note that after the clone LUN is mounted on the server, you do not need to create a new file system on it; the file system captured in the snapshot is already there, and it is read-writable.
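The array-side portion of the steps above can be sketched as the following command sequence (the clone path /vol/vol1/lun1-orig and the initiator group name ig1 are example names from this walkthrough):

```shell
# ONTAP 7-mode CLI, run on the array console.
lun clone create /vol/vol1/lun1-orig -b /vol/vol1/lun1 vol1-snap
lun online /vol/vol1/lun1-orig       # only needed if the clone comes up offline
lun map /vol/vol1/lun1-orig ig1      # expose the clone to the server's igroup

# ... discover the LUN, mount it, and copy data back on the server ...

# Cleanup on the array once restoration is done.
lun unmap /vol/vol1/lun1-orig ig1
lun offline /vol/vol1/lun1-orig
lun destroy /vol/vol1/lun1-orig
```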
Snapshot for NFS
On NetApp Data ONTAP, an NFS export is created from a Qtree, which resides on a volume. A Qtree can be regarded as a sub-directory of a volume, and this directory can be exported over NFS to the server. You cannot specify the size of a Qtree directly, since it is a directory, but you can set a quota on it; the quota is then shown as the size of the NFS file system when checked on the server.
The procedure for creating an NFS resource is as follows.
1. Create a Qtree qtree1 on the array using “qtree create /vol/vol1/qtree1”
2. On the array, check the contents of file /etc/exports using “rdfile /etc/exports”
3. Note the line for /vol/vol0 and change the management IP address to the IP address of the server you want to use for disk array management. In this example, the management server's IP address is 192.168.56.12.
/vol/vol0 -sec=sys,ro,rw=192.168.56.12,root=192.168.56.12,nosuid
4. Mount /vol/vol0 on server with IP address 192.168.56.12
5. On the management server, open the file <mount point>/etc/exports and add a line for Qtree qtree1. Server 192.168.56.12 is also used as the NFS client in this example.
/vol/vol1/qtree1 -sec=sys,rw,root=192.168.56.12
6. On the disk array, export the Qtree as NFS using command “exportfs /vol/vol1/qtree1”
7. On the disk array, check the quota status for vol1 using the command "quota status vol1". If quotas for vol1 are off, enable them using "quota on vol1". (You may be told that /etc/quotas does not exist; in that case, manually create an empty /etc/quotas file first.)
8. On the management server, open the file <mount point>/etc/quotas and add the line below to set the NFS size to 400M.
/vol/vol1/qtree1 tree 400M
9. On the disk array, resize quota for vol1 using command “quota resize vol1”.
10. Mount the NFS resource <IP address>:/vol/vol1/qtree1 on the client and you will see that the NFS size is 400M. (Note: repeat steps 8, 9, and 10 whenever you want to resize the NFS export.)
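The array-side commands from the procedure above, plus a client-side check, can be summarized as follows. This is a sketch: the array IP 192.168.56.11 and the client mount point /mnt/qtree1 are placeholder names not taken from the steps above.

```shell
# On the array (ONTAP 7-mode CLI), after editing /etc/exports and /etc/quotas
# via the mounted /vol/vol0 as described in the steps above.
qtree create /vol/vol1/qtree1
rdfile /etc/exports                  # review the current export entries
exportfs /vol/vol1/qtree1            # export the qtree over NFS
quota status vol1                    # check whether quotas are active
quota on vol1                        # enable quotas if they are off
quota resize vol1                    # re-read /etc/quotas after each edit

# On the Solaris 10 client (192.168.56.11 is a placeholder array address):
# mount -F nfs 192.168.56.11:/vol/vol1/qtree1 /mnt/qtree1
# df -h /mnt/qtree1                  # reported size should match the 400M quota
```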
You can then make some modifications on the NFS file system and take a snapshot of vol1. After the snapshot is taken, a directory called .snapshot becomes visible in the Qtree. Users on the NFS client can access this directory and copy data out of it for restoration.
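From the client's point of view, restoring a file is just a copy out of the read-only .snapshot directory. A sketch, assuming the export is mounted at /mnt/qtree1, the snapshot is named vol1-snap, and data.txt is a hypothetical file being recovered:

```shell
# On the array: take the volume snapshot (vol1-snap is an example name).
snap create vol1 vol1-snap

# On the Solaris NFS client:
ls /mnt/qtree1/.snapshot              # lists available snapshots, e.g. vol1-snap
ls /mnt/qtree1/.snapshot/vol1-snap    # read-only view of the qtree at snapshot time
cp /mnt/qtree1/.snapshot/vol1-snap/data.txt /mnt/qtree1/data.txt
```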
Note that if a volume snapshot is taken, both LUNs and Qtrees in the volume will have a snapshot.