Filename generation in shells
- ?
- Any single character
- [abc]
- One character from abc
- *
- Zero or more characters
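These wildcard rules can be demonstrated with a few throwaway files (the file names are invented for the example):

```shell
# Filename generation in action; the shell expands the pattern
# before the command runs.
cd "$(mktemp -d)"
touch a.txt ab.txt abc.txt x.txt

echo ?.txt      # ? = any single character: a.txt x.txt
echo [ax].txt   # one character from the set {a,x}: a.txt x.txt
echo a*.txt     # * = zero or more characters: a.txt ab.txt abc.txt
```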
Things to do on filesystems
- exportfs -a
- Export all filesystems in /etc/exports; after changes, first execute
exportfs -ua (all connections are lost).
Solaris uses 'share', see below.
- bdf
- Free disk space on all filesystems (HP)
- df -k
- Free disk space (on other unixes) in kilobytes
- df -i
- Free and used inodes
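A usage sketch (GNU/Linux df shown; flag spellings differ between Unixes, e.g. HP-UX uses bdf instead):

```shell
# Space and inode usage for the filesystem holding the current directory.
df -k .    # free disk space in kilobytes
df -i .    # free and used inodes
```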
- ls -i
- Return inode of file
- clri /dev/vgxx/<lvol> <inode>
- HP-UX specific. Clear the inode on the device; always run fsck afterwards.
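A small sketch of working with inode numbers, using a scratch directory and an invented file name; find -inum locates a file by its inode number (without touching it, unlike clri):

```shell
# Read a file's inode with ls -i, then find the file back by inode number.
tmp=$(mktemp -d)
touch "$tmp/demo"
ino=$(ls -i "$tmp/demo" | awk '{print $1}')   # first field is the inode
find "$tmp" -inum "$ino"                      # prints the path of that file
```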
- find . -name '*bla' -mtime +90 -exec rm {} \;
- Find all files below the current directory whose names end in bla and
whose last modification time is more than 90 days in the past, and remove
the found files. You can select on many more criteria and execute any
command; see the manpages.
- fsck /dev/rdsk/<spec>
- Check filesystem on disk (use the character device (rdsk))
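The find expression with -exec rm deletes files; a safer dry run replaces -exec rm {} \; with -print to show what would be removed. A sketch with invented file names and a backdated timestamp:

```shell
# Dry run: list files ending in 'bla' older than 90 days instead of
# deleting them.
tmp=$(mktemp -d)
touch "$tmp/old.bla" "$tmp/new.bla"
touch -t 202001010000 "$tmp/old.bla"   # backdate far past 90 days
find "$tmp" -name '*bla' -mtime +90 -print
```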
- diskinfo <devicefile>
- HP specific. Reads from the disk; a quick check whether the disk is broken.
- fuser <mountpoint>
- Show processes using <mountpoint>
- tunefs
- Adjust filesystem parameters on the fly; files on disk are not
changed.
- scsictl
- Adjust SCSI controller parameters (e.g. turn on immediate
reporting)
- newfs <dev-file>
- Create new filesystem
-f Fragment size
-b Block size
-m Min. free space%
- mklost+found
- Create a lost+found directory with many free slots, so entries can later
be added here without changing the inode organisation.
- mount -F <fstype> -o <options> <devicefile> <mountpoint>
- Mount the filesystem on the devicefile at the mountpoint.
-F hsfs and -o ro may be needed to mount CD-ROMs
- /usr/sbin/lofiadm -a <path_to_iso_image_file> /dev/lofi/1
mount -F hsfs -o ro /dev/lofi/1 /mnt/tmp
- Mount an ISO file at /mnt/tmp
- umount /mnt/tmp
lofiadm -d /dev/lofi/1
- Release the .iso again
- mkisofs -J -L -a -r -V <CDNAME> -o <image_file_name> <path1> [path2] …
- Create a Joliet and Rockridge (Windows and Unix support) ISO image with
a volume label from the directories specified in <pathx>
HP
Logical Volume Management
- vgchange -a n /dev/vgxx
- Deactivate volume group vgxx
- vgchange -a y /dev/vgxx
- Activate volume group vgxx
- vgcfgrestore -n /dev/vgxx /dev/rdsk/<dev-file>
- Restore the LVM configuration to a disk in the volume group.
- pvcreate
- Create physical volume (HP LVM); creates the PVRA and VGRA (Physical
Volume Reserved Area and Volume Group Reserved Area). vgcreate fills
these areas; lvcreate also writes to the VGRA.
SUN specific
DiskSuite is the old Sun disk-mirroring software; nowadays Veritas Volume Manager is often used. DiskSuite commands are in /usr/lib/SUNWmd/bin/, configuration is in /etc/opt/SUNWmd/
Mirrors are presented like: D10 with submirrors D11 and D12
- metastat -p
- Generate /etc/opt/SUNWmd/md.tab
- metastat [Mirror]
- Info on mirrors; look for "Needs Maintenance"
- metadetach [-f] <Mirror> <subMirror>
- Detach submirror (-f = force)
- metaclear <subMirror>
- Clear slice
- metainit <SubMirror>
- Init slice
- metattach <Mirror> <subMirror>
- Attach a subMirror
- format
- Program to format and check disks; use it to check whether a disk is
still operable. You can define slices (partitions) on an operational
disk.
- iostat -E
- Report disk hardware and vendor information
- prtvtoc
- Report disk layout and partition information
- fstyp -v
- Show filesystem characteristics
- fsdb
- File system debugger
- shareall
- Like exportfs, all filesystems in /etc/dfs/dfstab are exported.
- sccli
- Interface to StorEdge arrays. Another way to access those is telnet to the StorEdge IP address.
If there is a password on the array it can be reset using sccli.
- sccli> show events
- Show errors on the array
If a mirror has submirrors in trouble but the disk looks OK (check with
format), you can try to rebuild the mirror with the metadetach, metaclear,
metainit, metattach sequence shown above.
Veritas Volume Manager
- vxassist [-g <dg>] make <volname> <size> <attribute>=<value>
- Attribute is e.g. layout=(no)mirror|raid5|(no)stripe
- vxedit [-g <dg>] -rf rm <name>
- Remove <name> (recursive, forced)
- vxassist [-g <dg>] mirror <volname>
- Enable mirroring
- vxdisk list
- List all disks
- vxdg [ list | import | deport ]
- List diskgroups, claim one for your node (import) or release it (deport)
- vxedit -g <group> set failing=off <disk>
- Clear the ‘failing flag’ for <disk>. Alarms sometimes are not cleared by Veritas. It is good practice to do this if a disk is reported failing. If the problem reoccurs the flag will be set again and hardware analysis is needed.
- vxdump 0fu
- Dump veritas file system (syntax similar to ufsdump)
- vxprint -dv
- Veritas DiskGroups and defined Volumes
- vxprint -g <diskgroup> -dl
- Display extended information (-l) about the media errors (-d) in a diskgroup (-g)
- vxtask -l list
- List running veritas tasks (synchronization)
- vxmend
- Simple repair tool
- vea
- Java GUI for Veritas (/opt/VRTS/bin/vea)
- vxstat -g <diskgroup>
- Display diskgroup statistics.
- fsadm -ED <filesystem>
- Report extent (-E) and directory (-D) fragmentation on a filesystem
- fsadm -ed <filesystem>
- Defragment extents (-e) and directories (-d)
Move disk group to another system

Command | Comment
vxprint -hmQf -g <dg> <vol> > file | Write the diskgroup data to a file
vxmake -d <file> | Create the diskgroup from the file
vxvol start <volname> | Start the volume
I’m very unsure about the tables below, check everything
Step by Step adding veritas filesystems

Action | Command | Remarks
Init disk-drive | vxdisksetup -i <disk> | Puts slices 3 and 4 on the disk: 3 is the private region (1MB config data), 4 is the public region (the rest of the disk)
Put disk in group | vxdg init <diskgrp> <disk> <disk>… / vxdg -g <diskgrp> adddisk <disk> | init defines a new group
Create volume | vxassist -g <diskgrp> make <volname> 100M layout=stripe | Creates a 100MB striped volume in diskgrp
Put filesystem on volume | mkfs |
Create mountpoint, mount and modify /etc/vfstab | |
Step by Step configuration with volume manager

Action | Command
Create subdisk | vxmake
Create plex | vxmake
Associate subdisk with plex |
Create volume |
Associate plex with volume |
Start volume | vxmake -U gen / vxvol init clean <volname>
File System Performance SUN
Minfree is calculated as 64M/ <Partition size> * 100%
1% < minfree < 10%
With volume manager minfree = 1%
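A worked example of the minfree rule, for a hypothetical 4096 MB partition (the size is invented for the illustration):

```shell
# minfree = 64M / partition size * 100%, clamped to the 1%..10% range.
size_mb=4096
minfree=$((64 * 100 / size_mb))        # integer percent; here 6400/4096 = 1
if [ "$minfree" -lt 1 ];  then minfree=1;  fi
if [ "$minfree" -gt 10 ]; then minfree=10; fi
echo "${minfree}%"                     # -> 1%
```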
UFS uses datablocks of 8 fragments (1K each).
Datablocks are filled from front to rear; partly filled blocks are written
from rear to front. If a file grows in a partly filled datablock, the other
file parts in the block are moved to another datablock.
UFS writes <maxcontig> datablocks at a time (filesystem I/O
clustersize); default = 7.
Random-access filesystems are best served by maxcontig = 1, or calculate it
as NCOL * (chunk size) / blocksize.