ZFS Cheatsheet
This is a quick and dirty cheatsheet on Sun's ZFS
Directories and Files | |
error messages | /var/adm/messages, console |
States | |
DEGRADED | One or more top-level devices are in the degraded state because they have gone offline. Sufficient replicas exist to keep functioning |
FAULTED | One or more top-level devices are in the faulted state because they have gone offline. Insufficient replicas exist to keep functioning |
OFFLINE | The device was explicitly taken offline by the "zpool offline" command |
ONLINE | The device is online and functioning |
REMOVED | The device was physically removed while the system was running |
UNAVAIL | The device could not be opened |
Scrubbing and Resilvering | |
Scrubbing | Examines all data to discover hardware faults or disk failures. Only one scrub may be running at a time, and you can start one manually (see the example below). |
Resilvering | The same concept as rebuilding or resyncing data onto a new disk in an array; the smart thing about resilvering is that it does not rebuild the whole disk, only the data that is required (the data blocks, not the free blocks), thus reducing the time needed to resync a disk. Resilvering is automatic when you replace disks, etc. If a scrub is already running it is suspended until the resilvering has finished, and then the scrubbing continues. |
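For example, a minimal sketch assuming a pool named data01:
## start a scrub manually
zpool scrub data01
## the same status command reports scrub and resilver progress
zpool status data01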
ZFS Devices | |
Disk | A physical disk drive |
File | The absolute path of pre-allocated files/images |
Mirror | Standard raid-1 mirror |
Raidz1/2/3 | Non-standard distributed parity-based software raid levels. One common problem, called the "write hole", is eliminated because in raidz the data and stripe are written simultaneously: if a power failure occurs in the middle of a write then you either have the data plus the parity or you don't. ZFS also supports self-healing: if it cannot read a bad block it will reconstruct it using the parity. You should keep the raidz array at a low power of two plus parity. raidz is more like raid3 than raid5 but does use parity to protect from disk failures (see the sketches below). |
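A couple of creation sketches; the pool name data01 and disks c1t0d0 through c4t0d0 are assumptions, not from the original:
## single parity, roughly analogous to raid-5
zpool create data01 raidz1 c1t0d0 c2t0d0 c3t0d0
## double parity, roughly analogous to raid-6
zpool create data01 raidz2 c1t0d0 c2t0d0 c3t0d0 c4t0d0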
spare | Hard drives marked as "hot spare" for a ZFS raid. By default hot spares are not used on a disk failure; you must turn on the "autoreplace" feature (see the sketch below). |
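A sketch of adding a hot spare and enabling automatic replacement (the disk name c4t0d0 is an assumption):
## add a hot spare to the pool
zpool add data01 spare c4t0d0
## allow the spare to kick in automatically on a disk failure
zpool set autoreplace=on data01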
cache | Traditional caching mechanisms such as Linux's use a least recently used (LRU) algorithm: basically, blocks are moved in and out of the cache on a first in, first out (FIFO) basis. Where the ZFS cache is different is that it caches both least recently used (LRU) block requests and least frequently used (LFU) block requests; the cache device uses the level 2 adaptive replacement cache (L2ARC). |
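A sketch of attaching an L2ARC cache device, typically a fast SSD (the device name is an assumption):
## add a cache (L2ARC) device to the pool
zpool add data01 cache c4t0d0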
log | There are two terminologies here: the ZFS Intent Log (ZIL), a logging mechanism where data to be written is stored and then later flushed as a transactional write, and the Separate Intent Log (SLOG), a dedicated device (usually a fast SSD) that holds the synchronous parts of the ZIL before they are flushed to the slower pool disks (see the sketch below). |
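A sketch of adding a log (SLOG) device; mirroring it protects in-flight synchronous writes (device names are assumptions):
## add a mirrored log device to the pool
zpool add data01 log mirror c4t0d0 c5t0d0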
Storage Pools | |
displaying | zpool list # zdb can view the inner workings of ZFS (zdb has a number of options) |
status | zpool status ## Show only errored pools with more verbosity zpool status -xv |
statistics | zpool iostat -v 5 5 Note: use this command like you would iostat |
history | zpool history -il Note: once a pool has been removed the history is gone |
creating | ## You cannot shrink a pool, only grow it ## you can also create raid pools (raidz/raidz1 - single parity, raidz2 - double parity, raidz3 - triple parity); see the sketches below |
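A few creation sketches, assuming disks c1t0d0/c2t0d0, the pool name data01 and the image path /zfs1/disk01:
## simple striped pool
zpool create data01 c1t0d0 c2t0d0
## mirrored pool
zpool create data01 mirror c1t0d0 c2t0d0
## file-based pool (the image must be pre-allocated)
mkfile 100m /zfs1/disk01
zpool create data01 /zfs1/disk01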
destroying | zpool destroy data01 ## in the event of a disaster you can re-import a destroyed pool zpool import -f -D -d /zfs1 data01 |
adding | zpool add data01 c2t0d0 Note: make sure that you get this right, as zpool only supports the removal of hot spares and cache disks; for mirrors see attach and detach below |
Resizing | ## When replacing a disk with a larger one you must enable the "autoexpand" feature to be able to use the extended space; you must do this before replacing the first disk (see the sketch below) |
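A sketch of the order of operations (pool and disk names are assumptions):
## enable autoexpand first, then replace the disk with a larger one
zpool set autoexpand=on data01
zpool replace data01 c1t0d0 c2t0d0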
removing | zpool remove data01 c2t0d0 Note: zpool only supports the removal of hot spares and cache disks; for mirrors see attach and detach below |
clearing faults | zpool clear data01 ## Clearing a specific disk fault zpool clear data01 c2t0d0 |
attaching (mirror) | ## c2t0d0 is an existing disk that is not mirrored, by attaching c3t0d0 both disks will become a mirror pair zpool attach data01 c2t0d0 c3t0d0 |
detaching (mirror) | zpool detach data01 c2t0d0 Note: see the notes above on attaching |
onlining | zpool online data01 c2t0d0 |
offlining | zpool offline data01 c2t0d0 |
Replacing | ## replacing like for like zpool replace data03 c2t0d0 ## replacing with another disk zpool replace data03 c2t0d0 c3t0d0 |
scrubbing | zpool scrub data01 Note: see the top of the table for more information about resilvering and scrubbing |
exporting | zpool export data01 ## you can list exported pools using the import command |
importing | ## when using standard disk devices, i.e. c2t0d0 zpool import data01 ## importing a destroyed pool zpool import -f -D -d /zfs1 data01 |
getting parameters | zpool get all data01 Note: the source column denotes if the value has been changed from its default value; a dash in this column means it is a read-only value |
setting parameters | zpool set autoreplace=on data01 Note: use the command "zpool get all <pool>" to obtain a list of current settings |
upgrade | ## List upgrade paths zpool upgrade -v ## upgrade all pools zpool upgrade -a ## upgrade specific pool, use "zpool get all <pool>" to obtain version number of a pool zpool upgrade data01 ## upgrade to a specific version zpool upgrade -V 10 data01 |
Filesystem | |
displaying | zfs list zfs list -t all -r <zpool> |
creating | ## presuming I have a pool called data01, create a /data01/apache filesystem zfs create data01/apache Note: don't use a zfs volume as a dump device, it is not supported (see more creation sketches below) |
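A couple more creation sketches; the mountpoint, dataset names and volume size are illustrative assumptions:
## create a filesystem with a specific mountpoint
zfs create -o mountpoint=/www data01/web
## create a 10GB volume (appears as a block device under /dev/zvol)
zfs create -V 10G data01/vol01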
destroying | zfs destroy data01/oracle ## using the recursive options: -r = all children, -R = all dependents zfs destroy -r data01/oracle zfs destroy -R data01/oracle |
mounting | zfs mount data01 ## you can create a temporary mount that expires after unmounting zfs mount -o mountpoint=/tmpmnt data01 |
unmounting | zfs umount data01 |
share | zfs share data01 |
unshare | zfs unshare data01 ## to persist over reboots zfs set sharenfs=off data01 |
snapshotting | ## snapshotting is like taking a picture: delta changes are recorded to the snapshot when the original filesystem changes; to destroy a dataset you must first remove all of its snapshots (or use the recursive destroy options above) ## create a snapshot zfs snapshot data01@10022010 ## list snapshots zfs list -t snapshot ## destroy a snapshot zfs destroy data01@10022010 |
rollback | ## by default you can only roll back to the latest snapshot; to roll back to an older one you must delete all newer snapshots (or use -r, see the sketch below) zfs rollback data01@10022010 |
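A sketch of rolling back past newer snapshots in one step (the snapshot name is illustrative):
## -r destroys any snapshots newer than the target before rolling back
zfs rollback -r data01@09012010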
cloning/promoting | ## clones are writeable filesystems that are created from a snapshot; a dependency will remain on the snapshot for as long as the clone exists ## clones cannot be created across zpools, you need to use send/receive, see the topics below ## cloning zfs clone data01@10022010 data01/clone01 ## promoting a clone reverses the dependency, making the clone the parent so the original snapshot can be destroyed zfs promote data01/clone01 |
renaming | ## the dataset must be kept within the same pool zfs rename data01/ora_disk01 data01/ora_d01 Note: you have two useful options: -p creates all the non-existent parent datasets, -r recursively renames the snapshots of all descendent datasets |
Compression | ## You enable compression by setting a property; valid values are on, off, lzjb, gzip, gzip-[1-9] and zle. Note that it only starts compressing data written after the property is set; existing data is not compressed zfs set compression=lzjb data01/apache ## you can get the compression ratio zfs get compressratio data01/apache |
Deduplication | ## you can save disk space using deduplication, which can be done on a file, block or byte basis; for example with file-level dedup each file is hashed with a cryptographic hash algorithm (such as SHA-256) and files with identical hashes are stored only once ## enable dedup on a dataset zfs set dedup=on data01 ## So how much RAM do you need? You can use the zdb command to check: take the "bp count"; the dedup table needs about 320 bytes of RAM per block ## to see the histogram of how many blocks are referenced how many times, use zdb (see the worked example below) |
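A worked estimate under the 320-bytes-per-block rule of thumb; the bp count of 10 million is purely illustrative:
## get the block (bp) count for the pool
zdb -b data01
## RAM needed for the dedup table: 10,000,000 blocks x 320 bytes = ~3.2 GB
## view the dedup histogram (how many blocks are referenced how many times)
zdb -DD data01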
getting parameters | ## List all the properties zfs get all data01 Note: the source column denotes if the value has been changed from its default value; a dash in this column means it is a read-only value |
setting parameters | ## set and unset a quota zfs set quota=10G data01/apache zfs set quota=none data01/apache Note: use the command "zfs get all <dataset>" to obtain a list of current settings |
inherit | ## set back to the default value zfs inherit compression data03/oracle |
upgrade | ## List the upgrade paths zfs upgrade -v ## List all the datasets that are not at the current version zfs upgrade ## upgrade all datasets zfs upgrade -a |
send/receive | ## a complete example of a send and receive with incremental update is sketched below ## set the slave (receiving) filesystem to read-only, because writing to it can cause data corruption; make sure you do this before accessing anything ## the snapshot stream can also be piped over SSH, written to a tape drive (you can also use cpio), or compressed with gzip |
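A complete sketch of the workflow described above, assuming pools named master and slave and a dataset named data (all names illustrative):
## create the data filesystem, add some test files, then take an initial snapshot
zfs create master/data
zfs snapshot master/data@1
## full send into the slave pool, then keep the copy read-only
zfs send master/data@1 | zfs receive slave/data
zfs set readonly=on slave/data
## later, send only the changes since snapshot @1
zfs snapshot master/data@2
zfs send -i master/data@1 master/data@2 | zfs receive slave/data
## the same streams can go over SSH or through gzip
zfs send master/data@2 | ssh backuphost zfs receive backups/data
zfs send master/data@2 | gzip > /tmp/master-data@2.gz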
allow/unallow | ## display the permission sets and any user permissions zfs allow data01 ## delete a permission set zfs unallow -s @set1 data01 |
Quota/Reservation | ## Not strictly a command, but worth discussing here: you can apply a quota to a dataset; you can reduce a quota only if the new limit is not below the space already used ## Newer versions of Solaris allow you to set group and user quotas ## set a quota zfs set quota=10G data01/apache ## set up a user quota (use groupquota for groups) zfs set userquota@user01=5G data01/apache ## remove a user quota (use groupquota for groups) zfs set userquota@user01=none data01/apache ## List user quotas (use groupspace for groups); you can also list a single user's quota, for example the root user zfs userspace data01/apache zfs get userquota@root data01/apache |
ZFS tasks | |
Replace failed disk | # List the zpools and identify the failed disk zpool status -x # replace the failed disk (like for like) zpool replace data01 c1t0d0 # clear any existing errors zpool clear data01 # you can now remove the failed disk in the normal way depending on your hardware |
Expand a pool's capacity | # you cannot remove a disk from a pool but you can replace it with a larger disk; enable autoexpand before replacing zpool set autoexpand=on data01 zpool replace data01 c1t0d0 c2t0d0 |
Install the boot block | # the command depends if you are using a sparc or a x86 system sparc - installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0 x86 - installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0 |
Lost root password | # You have two options to recover the root password ## Option one: boot from the installation media into single-user mode (ok boot cdrom -s), import the root pool, mount the root dataset and remove root's password hash from /etc/shadow ## Option two: boot the failsafe archive (ok boot -F failsafe), let it mount the root boot environment on /a, then edit /a/etc/shadow in the same way |
Primary mirror disk in root is unavailable or fails | # boot the secondary mirror, for example from the OBP prompt ok boot disk1 # now replace the failed disk zpool replace rpool c0t0d0s0 # if the replace above fails then detach and reattach the primary mirror zpool detach rpool c0t0d0s0 zpool attach rpool c0t1d0s0 c0t0d0s0 # don't forget to install the boot block on the new disk (see above) |
Resize swap area (and dump areas) | # You can resize the swap area if it is not being used; first record the size and check whether it is in use swap -l # delete the swap area swap -d /dev/zvol/dsk/rpool/swap # resize the underlying volume zfs set volsize=2G rpool/swap # re-add the swap area swap -a /dev/zvol/dsk/rpool/swap Note: if you cannot delete the original swap area due to it being too busy then simply add another swap area; the same procedure is used for dump areas but using the "dumpadm" command |