When a server is initialized during its first run, a structure at a
fixed RVM address is initialized, from which all data access is
initiated. The structure is of type coda_recoverable_segment, defined
in coda_globals.h. For volume handling it holds the VolumeList, a
static array of volHead structures.
We will now explore how the volHead structures lead to the other volume structures. Many data structures link these together:
volHead, defined in camprivate.h. An RVM structure containing a
VolumeHeader and a VolumeData structure.
VolumeData, defined in camprivate.h. An RVM structure; it points to
the VolumeDiskData and to the SmallVnodeLists (together with
some constants related to these), and to the BigVnodeLists.
VolumeHeader, defined in volume.h. Contains the
volume id, type and parent id.
VolumeDiskData, defined in volume.h. An RVM structure holding the persistent data associated with the volume.
These RVM structures are copied into VM when a volume is accessed; we will describe this in detail. In VM we have a hash table VolumeHashTable for Volume structures, with the VolumeId as hash key. It is used in conjunction with a doubly linked list volumeLRU of volHeader structures, which was probably created to avoid keeping all volHeaders in VM, since they are large.
Volume, defined in volume.h. This structure is the
principal access point to a volume. These VM structures are held in a
hash table. A Volume contains quite a lot of information, such as a pointer to
a volHeader (which has the cached RVM data), Device, partition,
vol_index, vnodeIndex, locks and other run-time data.
volHeader, defined in volume.h. A VM structure sitting on
a dlist, with a back pointer to a Volume. It contains a
VolumeDiskData structure: this is the VM cached copy of the RVM
VolumeDiskData.
Notice that in RVM, volumes are identified principally by their index, which is their position in the static VolumeList array of volHead structures. Elsewhere volumes are mostly accessed by their volume id. Mapping an index to a volume id proceeds through the VolumeHeader structures held inside the volHead structures of the VolumeList.
The reverse mapping, from a VolumeId to an index, is done
through an auxiliary hash table VolTable of type vhashtab.
It is informative to know the sizes of all these structures:
The routine VInitVolumePackage sets up many other structures related to volumes and vnodes:
Calloc a sequence of (normally 50) volHeaders, then call ReleaseVolumeHeader on each to put it at the head of the volumeLRU.
Set up the VolTable, hashing volume ids to get the index in the RVM VolumeList. This hash table is used to look up volumes by id; it stores pointers to the Volume structures.
Set up the vnode VM LRU caches for small and large vnodes, and store summary information in the VnodeClassInfoArray for both small and large vnodes. Allocated vnode arrays are reached through the VnodeClassInfoArray.
Go through the VolumeList and assign VM resolution log memory for every volume; store the pointer in VolLog[i]. The number of rlentries assigned is also stored. It is not yet clear if RVM resolution is using this.
See if there is a VLDB.
Find the server partitions.
Next comes salvaging. This analyzes all inodes on a Coda server partition and matches the data against directory contents. After this has completed, the volume's InUse bit and NeedsSalvage bit (stored in the disk data) are cleared.
This interface is discussed below.
Now iterate through all volumes and call VAttachVolumeById. The details are far from clear, but the intent is to add the information for each volume to the VM hash tables.
write out the
This variable is now set to 1.
This code is sheer madness.
Attaching volumes is the following process. First a VGetVolume is done. If this returns a volume, it has been found in the hash table. If that volume is in use, it is detached with VDetachVolume (an error here is ignored); attaching is not supposed to happen with anything already in the hash table.
Next the VolumeHeader is extracted from RVM. If the program is running as a volumeUtility then FSYNC_askfs is used to request the volume. We continue by checking the partition and calling attach2.
Attach2 allocates a new Volume structure, initializes the locks in this structure and fills in the partition information. A volHeader is found on the LRU list and the routine VolDiskInfoById reads in the RVM data; this can only go wrong if the volume id cannot be matched to a valid index. If the needs-salvage field is set, we return a NULL pointer (leaving all memory allocated). Attach2 continues by checking for a variety of bad conditions, such as the volume apparently having crashed. If the volume is blessed, in service and the salvage flag is not set, then we are ready to go and put the volume in the hash table. If the volume is writable, the bitmaps for vnodes are filled in.
VDetachVolume is also very complicated. It starts by taking the volume out of the hash table (forcefully). This means that VGetVolume will no longer find it.
The shuttingDown flag is set to one, and VPutVolume is called. This frees the volume and LWP_signals people waiting on the VPutVolume condition. [It is not clear that any process is waiting, since they normally only appear to wait when the goingOffline flag is set, not when the shuttingDown flag is set.]
There are numerous flags indicating the state of volumes:
Cleared by attach2, but only if the volume is blessed, inService, and not needsSalvaged.
The latter flag has some use in the volume utility package, not much else.
Taking a volume offline (as in VOffline) means writing the volume header to disk with the inUse bit turned off. A copy is kept around in VM.
Set by VOffline.
Used in VDetachVolume.
Set by VOffline. Cleared by VPutVolume, VForceOffline and attach2. When clearing it, VPutVolume sets inUse to 0, and VForceOffline does that and also sets the needsSalvaged flag.
Heavily manipulated by volume utilities while creating volumes, backing them up etc. Probably indicates that the volume is not in an internally consistent state.
This is an error condition. Cleared by VolSalvage, set by VForceOffline.
Requests for volume operations, such as VGetVolume, can come from:
The FSYNC package allows a volume utility to register itself. The call VConnectFS is made during VInitVolutil and makes available an array for the calling thread, named OffLineVolumes. This array is cleaned up during VDisconnectFS and contains a list of volumes that have been requested to be offline.
One request that can be made is FSYNC_ON, to get a volume back online. This clears the volume's spot in the OffLineVolumes, calls VAttachVolume and finally writes out changes by calling VPutVolume.
The other requests are FSYNC_OFF and FSYNC_NEEDVOLUME. If the volume is not yet in the thread's OffLineVolumes, a spot is found for it. If the request is not for cloning, the volume is marked as BUSY in the specialStatus field of the Volume structure. In VOffline the volume header is written to disk with the inUse bit turned off. A copy of the header is maintained in memory, however (which is why this is VOffline, not VDetach).
The FSYNC package can also watch over relocation sites. This is not functional anymore, and should probably be removed.
This doubly linked list of volHeaders is one of the more confusing areas in the code. It is the interface between the VM and RVM data, and the invariants are not very clear.
See if there is a volHeader available for the volume passed.
Assign a volHeader to the Volume structure. This routine does not fill in the data.
When a volume is needed, a call is made to VGetVolume. This routine is given a volume id to search for and finds the Volume structure in the VolumeHashTable. If it is not found, VNOVOL is returned.
If there are already users of this volume, then we are guaranteed that the volHeader structure for this volume is available too. The header field in the Volume structure may still be set; otherwise a volHeader must be found in the LRU, which possibly involves writing an old one back to RVM.
If the header field in the Volume structure is not null we are fine; if it is null we would like to use the first available header on the volumeLRU list. Availability is determined by checking the back pointer in the volHeader.
The start of this is found in vol-create.cc in the volutil directory.