All Classes and Interfaces

An abstract implementation of ListenableFuture, intended for advanced users only.
Class to pack an AclEntry into an integer.
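The packing idea can be sketched as follows; the field layout and widths here are illustrative assumptions, not the actual AclEntryStatusFormat layout:

```java
// Hypothetical sketch of packing an ACL entry into an int. The field widths
// (permission 3 bits, type 2 bits, scope 1 bit, name id in the high bits)
// are assumptions for illustration; HDFS's real format differs.
public final class PackedAclEntry {
    static final int PERM_BITS = 3, TYPE_BITS = 2, SCOPE_BITS = 1;

    static int pack(int nameId, int scope, int type, int perm) {
        return (nameId << (PERM_BITS + TYPE_BITS + SCOPE_BITS))
                | (scope << (PERM_BITS + TYPE_BITS))
                | (type << PERM_BITS)
                | perm;
    }

    static int perm(int packed)   { return packed & 0x7; }
    static int type(int packed)   { return (packed >>> PERM_BITS) & 0x3; }
    static int scope(int packed)  { return (packed >>> (PERM_BITS + TYPE_BITS)) & 0x1; }
    static int nameId(int packed) { return packed >>> (PERM_BITS + TYPE_BITS + SCOPE_BITS); }

    public static void main(String[] args) {
        int e = pack(42, 1, 2, 7);
        System.out.println(perm(e) + " " + type(e) + " " + scope(e) + " " + nameId(e));
        // prints 7 2 1 42
    }
}
```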
Feature that represents the ACLs of the inode.
AclStorage contains utility methods that define how ACL data is stored in the namespace.
Active state of the namenode.
This exception collects all IOExceptions thrown when adding block pools and scanning volumes.
Helper methods for CacheAdmin/CryptoAdmin/StoragePolicyAdmin.
Protocol between the Namenode and the Datanode to read the AliasMap used for Provided storage.
AliasMapProtocolServerSideTranslatorPB is responsible for translating RPC calls and forwarding them to the internal InMemoryAliasMap.
A class that can be used to schedule an asynchronous check on a given Checkable.
Until we migrate to log4j2, use this appender for namenode audit logger as well as datanode and namenode metric loggers with log4j properties, if async logging is required with RFA.
A FileOutputStream that has the property that it will only show up at its destination once it has been entirely written and flushed to disk.
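The property described above is typically achieved by writing to a temporary file and atomically renaming it into place. A hedged sketch using java.nio; the real class also flushes and syncs the temp file before renaming, which this simplification omits:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Sketch of the write-to-temp-then-rename idiom behind an atomic file write.
// The ".tmp" suffix and helper name are illustrative assumptions.
public final class AtomicWriteDemo {
    static void atomicWrite(Path dest, byte[] data) throws IOException {
        Path tmp = dest.resolveSibling(dest.getFileName() + ".tmp");
        Files.write(tmp, data);            // contents land in the temp file first
        Files.move(tmp, dest,              // then appear at dest all at once
                StandardCopyOption.ATOMIC_MOVE, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path dest = Files.createTempDirectory("demo").resolve("value.txt");
        atomicWrite(dest, "42".getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(Files.readAllBytes(dest), StandardCharsets.UTF_8));
        // prints 42
    }
}
```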
Interface defining an audit logger.
Subclass of AuthenticationFilter that obtains Hadoop-Auth configuration for webhdfs.
Filter initializer to initialize AuthFilter.
Extending AutoCloseableLock such that the users can use a try-with-resource syntax.
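A minimal sketch of the try-with-resources idiom this enables, using a simplified lock class (the real AutoCloseableLock API may differ):

```java
import java.util.concurrent.locks.ReentrantLock;

// Minimal sketch: acquire() returns the lock itself, so a try-with-resources
// block releases it automatically when the block exits.
public final class AutoCloseableLockDemo implements AutoCloseable {
    private final ReentrantLock lock = new ReentrantLock();

    AutoCloseableLockDemo acquire() {
        lock.lock();
        return this;
    }

    @Override
    public void close() {
        lock.unlock();
    }

    boolean isHeld() { return lock.isHeldByCurrentThread(); }

    public static void main(String[] args) {
        AutoCloseableLockDemo l = new AutoCloseableLockDemo();
        try (AutoCloseableLockDemo held = l.acquire()) {
            System.out.println("held=" + held.isHeld());   // prints held=true
        }
        System.out.println("held=" + l.isHeld());          // prints held=false
    }
}
```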
Space balanced block placement policy.
Space balanced rack fault tolerant block placement policy.
A DN volume choosing policy which takes into account the amount of free space on each of the available volumes when considering where to assign a new replica allocation.
Extension of FSImage for the backup node.
BackupNode.
 
The balancer is a tool that balances disk space usage on an HDFS cluster when some datanodes become full or when new empty nodes join the cluster.
Balancer bandwidth command instructs each datanode to change its value for the max amount of network bandwidth it may use during the block balancing operation.
 
 
The full set of protocols used by the Balancer.
Class that represents a file on disk which stores a single long value, but does not make any effort to make it truly durable.
BinaryEditsVisitor implements a binary EditsVisitor.
Implements TrustedChannelResolver to trust ips/host/subnets based on a blackList.
Interface used to load provided blocks.
An abstract class used to read and write block maps for provided blocks.
An abstract class that is used to read BlockAliases for provided blocks.
Reader options.
An abstract class used as a writer for the provided block map.
Writer options.
This interface is used by the block manager to expose a few characteristics of a collection of Block/BlockUnderConstruction.
A BlockCommand is an instruction to a datanode regarding some blocks under its control.
Dispatching block replica moves between datanodes to satisfy the storage policy.
A BlockECReconstructionCommand is an instruction to a DataNode to reconstruct a striped block group with missing blocks.
Block and targets pair.
A BlockIdCommand is an instruction to a datanode regarding some blocks under its control.
BlockIdManager allocates the generation stamps and the block ID.
For a given block (or an erasure coding block group), BlockInfo class maintains 1) the BlockCollection it is part of, and 2) datanodes where the replicas of the block, or blocks belonging to the erasure coding block group, are stored.
Subclass of BlockInfo, used for a block with replication scheme.
Subclass of BlockInfo, presenting a block group in erasure coding.
This class contains datanode storage information and block index in the block group.
Key used for generating and verifying block tokens.
 
 
 
Keeps information related to the blocks stored in the Hadoop cluster.
Used to inject certain faults for testing.
This class represents status from a block movement task.
Interface for notifying about block movement attempt completion.
Block movement status code.
Interface for implementing different ways of block moving approaches.
 
This interface is used for choosing the desired number of targets for placing block replicas.
 
The class is responsible for choosing the desired number of targets for placing block replicas.
 
The class is responsible for choosing the desired number of targets for placing block replicas.
The class is responsible for choosing the desired number of targets for placing block replicas on environment with node-group layer.
The class is responsible for choosing the desired number of targets for placing block replicas that honors upgrade domain policy.
 
 
An implementation of BlockPlacementStatus.
An implementation of BlockPlacementStatus.
A block pool slice represents a portion of a block pool stored on a volume.
Manages storage for the set of BlockPoolSlices which share a particular block pool id, on this DataNode.
Manages a BlockTokenSecretManager per block pool.
BlockRecoveryCommand is an instruction to a data-node to recover the specified blocks.
This is a block with locations from which it should be recovered and the new generation stamp, which the block will have after successful recovery.
 
This class handles the block recovery work commands.
The context of the block report.
 
 
Block movements status handler, which can be used to collect details of the completed block movements.
This class represents the blocks for which storage movements have been done by datanodes.
This is an interface used to retrieve statistic information related to block management.
A monitor class for checking whether block storage movement attempts have completed.
A BlockStorageMovementCommand is an instruction to a DataNode to move the given set of blocks to specified target DataNodes to fulfill the block storage policy.
Stores block to storage info that can be used for block movement.
A class to track the block collection IDs (inode IDs) for which physical storage movement is needed, as per the Namespace and StorageReports from the DN.
Info for a recursive directory scan.
This class is used to track the completion of block movement future tasks.
A collection of block storage policies.
Maintains an array of blocks and their corresponding storage IDs.
A class to keep track of a block and its locations.
 
BlockTokenSecretManager can be instantiated in 2 modes, master mode and worker mode.
Represents the under construction feature of a Block.
Tool which allows the standby node's storage directories to be bootstrapped by copying the latest namespace snapshot from the active namenode.
Base class for BPServiceActor actions, issued by the BPOfferService class to tell BPServiceActor to take several actions.
 
Wrapper for byte[] so that byte[] can be used as a key in a HashMap.
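A sketch of why such a wrapper is needed and how it might look; byte[] uses identity-based equals/hashCode, so two equal arrays would otherwise be distinct HashMap keys. The class name ByteArrayKey is hypothetical:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Hypothetical wrapper giving byte[] value-based equals/hashCode so it can
// serve as a HashMap key.
public final class ByteArrayKey {
    private final byte[] bytes;

    ByteArrayKey(byte[] bytes) { this.bytes = bytes; }

    @Override public boolean equals(Object o) {
        return o instanceof ByteArrayKey && Arrays.equals(bytes, ((ByteArrayKey) o).bytes);
    }

    @Override public int hashCode() { return Arrays.hashCode(bytes); }

    public static void main(String[] args) {
        Map<ByteArrayKey, String> map = new HashMap<>();
        map.put(new ByteArrayKey(new byte[]{1, 2, 3}), "block");
        // A distinct but equal array finds the same entry.
        System.out.println(map.get(new ByteArrayKey(new byte[]{1, 2, 3})));  // prints block
    }
}
```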
This class implements command-line operations on the HDFS Cache.
Represents a cached block.
Namenode class that tracks state related to a cached path.
The Cache Manager handles caching on DataNodes.
 
A CachePool describes a set of cache resources being managed by the NameNode.
 
Scans the namesystem, scheduling blocks to be cached as appropriate.
Cancels a running plan.
Provides a simple interface where one thread can mark an operation for cancellation, and another thread can poll for whether the cancellation has occurred.
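The mark-and-poll pattern described above can be sketched as follows; this is a simplified stand-in for the real Canceler, whose API may differ:

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch of a canceler: one thread sets a cancellation reason once, while a
// worker thread polls for it between units of work.
public final class CancelerDemo {
    private final AtomicReference<String> reason = new AtomicReference<>();

    void cancel(String why) { reason.compareAndSet(null, why); }  // first reason wins
    boolean isCancelled()   { return reason.get() != null; }
    String getReason()      { return reason.get(); }

    public static void main(String[] args) {
        CancelerDemo c = new CancelerDemo();
        int processed = 0;
        for (int i = 0; i < 10; i++) {
            if (c.isCancelled()) break;             // worker polls between steps
            processed++;
            if (i == 2) c.cancel("shutting down");  // another thread would normally do this
        }
        System.out.println(processed + " " + c.getReason());  // prints 3 shutting down
    }
}
```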
A Checkable is an object whose health can be probed by invoking its Checkable.check(K) method.
Checkpoint command.
 
Utility class to facilitate some fault injection tests for the checkpointing process.
A unique signature intended to identify checkpoint transactions.
Holder class for checksum bytes and the length in a block at which the checksum bytes end. For example, if length = 1023 and the checksum is 4 bytes covering 512 bytes, then the checksum applies to the last chunk, i.e., bytes 512-1023.
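The chunk arithmetic in the example above can be checked with a small helper; the class and method names here are hypothetical:

```java
// Finds the start offset of the last checksum chunk in a block of the given
// length, where each checksum covers bytesPerChecksum bytes. With length 1023
// and 512 bytes per checksum, the last chunk spans bytes 512-1022, matching
// the worked example in the description.
public final class ChecksumChunkDemo {
    static long lastChunkStart(long length, int bytesPerChecksum) {
        long aligned = (length / bytesPerChecksum) * bytesPerChecksum;
        // If the length is exactly chunk-aligned, the last chunk is the full
        // chunk ending at length; otherwise it is the trailing partial chunk.
        return aligned == length ? length - bytesPerChecksum : aligned;
    }

    public static void main(String[] args) {
        System.out.println(lastChunkStart(1023, 512));  // prints 512
        System.out.println(lastChunkStart(1024, 512));  // prints 512
    }
}
```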
Implementation for protobuf service that forwards requests received on ClientDatanodeProtocolPB to the ClientDatanodeProtocol server implementation.
This class is used on the server side.
ClusterConnector interface hides all specifics about how we communicate to the HDFS cluster.
This class manages datanode configuration using a json file.
Common interface for command handling.
Connector factory creates appropriate connector based on the URL.
Constant counters for an enum type.
An exception class for modification on ConstEnumCounters.
The content types such as file, directory and symlink to be computed.
The counter to be computed for content types such as file, directory and symlink, and the storage type usage such as SSD, DISK, ARCHIVE.
 
 
An interface for the communication between SPS and Namenode module.
Stores information about all corrupt blocks in the File System.
The corruption reason code.
This class implements crypto command-line operations.
Provides a cyclic Iterator for a NavigableMap.
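One way such a cyclic iterator might work, sketched with NavigableMap's tailMap/headMap views; this is a simplified stand-in, not the real CyclicIteration implementation:

```java
import java.util.Iterator;
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

// Sketch of a cyclic iteration over a NavigableMap: start at a given key,
// walk to the end, then wrap around to the beginning, visiting each entry once.
public final class CyclicMapDemo {
    static <K, V> Iterable<Map.Entry<K, V>> cyclicFrom(NavigableMap<K, V> map, K start) {
        return () -> new Iterator<Map.Entry<K, V>>() {
            private Iterator<Map.Entry<K, V>> it =
                    map.tailMap(start, true).entrySet().iterator();
            private boolean wrapped = false;
            private int remaining = map.size();

            @Override public boolean hasNext() { return remaining > 0; }

            @Override public Map.Entry<K, V> next() {
                if (!it.hasNext() && !wrapped) {   // wrap to the head exactly once
                    it = map.headMap(start, false).entrySet().iterator();
                    wrapped = true;
                }
                remaining--;
                return it.next();
            }
        };
    }

    public static void main(String[] args) {
        NavigableMap<Integer, String> m = new TreeMap<>();
        m.put(1, "a"); m.put(2, "b"); m.put(3, "c");
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<Integer, String> e : cyclicFrom(m, 2)) sb.append(e.getKey());
        System.out.println(sb);   // prints 231
    }
}
```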
DataNode is a class (and program) that stores a set of blocks for a DFS deployment.
 
 
This class implements the logic to track decommissioning and entering maintenance nodes, ensure all their blocks are adequately replicated before they are moved to the decommissioned or maintenance state.
Checks to see if datanodes have finished DECOMMISSION_INPROGRESS or ENTERING_MAINTENANCE state.
Manages decommissioning and maintenance state for DataNodes.
This abstract class provides some base methods which are inherited by the DatanodeAdmin BackOff and Default Monitors, which control decommission and maintenance mode.
Interface used to implement a decommission and maintenance monitor class, which is instantiated by the DatanodeAdminManager class.
The Datanode cache Manager handles caching of DatanodeStorageReport.
Base class for data-node command.
This class extends the DatanodeInfo class with ephemeral information (e.g., health, capacity, what blocks are associated with the Datanode) that is private to the Namenode, i.e., this class is not exposed to clients.
Block and targets pair.
A list of CachedBlock objects on this datanode.
 
This class detects and maintains DataNode disk outliers and their latencies for different ops (metadata, read, write).
This structure is a wrapper over disk latencies.
Used for injecting faults in DFSClient and DFSOutputStream tests.
Data node HTTP Server Class.
Since the DataNode HTTP server is not implemented in terms of the servlet API, it takes some extra effort to obtain an instance of the filter.
 
 
Enums for features that change the layout version.
Protocol used by a DataNode to send lifeline messages to a NameNode.
This class is the client side translator to translate the requests made on DatanodeLifelineProtocol interfaces to the RPC server implementing DatanodeLifelineProtocolPB.
Protocol used by a DataNode to send lifeline messages to a NameNode.
Implementation for protobuf service that forwards requests received on DatanodeLifelineProtocolPB to the DatanodeLifelineProtocol server implementation.
Used to manage a set of locks for the datanode.
Acquire block pool level and volume level lock first if you want to acquire dir lock.
Manages datanodes, including decommissioning and other activities.
 
This class is for maintaining the various DataNode statistics and publishing them through the metrics interfaces.
This is the JMX management interface for data node information.
This class maintains DataNode peer metrics (e.g. numOps, AvgTime, etc.) for various peer operations.
Protocol that a DFS datanode uses to communicate with the NameNode.
This class is the client side translator to translate the requests made on DatanodeProtocol interfaces to the RPC server implementing DatanodeProtocolPB.
 
 
DatanodeRegistration class contains all information the name-node needs to identify and verify a data-node when it contacts the name-node.
Datanode statistics.
A Datanode has one or more storages.
Creates a UGI from the request, for the WebHDFS requests to the DNs.
Provides utility methods for the Datanode.
This class is for maintaining Datanode Volume IO related statistics and publishing them through the metrics interfaces.
Class for maintaining a set of locks for FsDatasetImpl.
This interface is used to generate a sub-lock name for a block ID.
A class that encapsulates running disk checks against each volume of an FsDatasetSpi and allows retrieving a list of failed volumes.
A callback interface that is supplied the result of running an async disk check on multiple volumes.
Data storage information file.
VolumeBuilder holds the metadata (e.g., the storage directories) of the prepared volume returned from DataStorage.prepareVolume(DataNode, StorageLocation, List).
A class to throttle data transfers.
This class implements debug operations on the HDFS command-line.
This class provides an interface for the Namenode and Router to audit event information.
A default implementation of the INodeAttributesProvider.
Fetch a DelegationToken from the current Namenode and store it in the specified file.
A HDFS specific delegation token secret manager.
 
A simple wrapper around UTF8.
Utility class for tracking descent into the structure of the Visitor class (ImageVisitor, EditsVisitor etc.)
This class provides some DFS administrative access shell commands.
This class provides rudimentary checking of DFS volumes for errors and sub-optimal conditions.
This class contains constants for configuration keys and default values used in hdfs.
Class to extend HAAdmin to do a little bit of HDFS-specific configuration.
The HDFS specific network topology class.
A base class for the servlets in DFS.
The HDFS-specific representation of a network topology inner node.
 
Represent one of the NameNodes configured in the cluster.
Comparator for sorting DataNodeInfo[] based on decommissioned and entering_maintenance states.
Comparator for sorting DataNodeInfo[] based on slow, stale, entering_maintenance, decommissioning and decommissioned states.
 
Diff<K,E extends Diff.Element<K>>
The difference between the current state and a previous state of a list.
Containing exactly one element.
An interface for the elements in a Diff.
An interface for passing a method in order to process elements.
Undo information for some operations such as delete(E) and Diff.modify(Element, Element).
This interface defines the methods used to store and manage InodeDiffs.
Resizable-array implementation of the DiffList interface.
SkipList is an implementation of a data structure for storing a sorted list of DirectoryDiff elements, using a hierarchy of linked lists that connect increasingly sparse subsequences (defined by a skip interval) of the diffs.
Periodically scans the data directories for block and block metadata files.
Helper class for compiling block info reports per block pool.
Helper class for compiling block info reports from report compiler threads.
A directory with this feature is a snapshottable directory, where snapshots can be taken.
Quota feature for INodeDirectory.
 
Feature used to store and process the snapshot diff information for a directory.
The difference of an INodeDirectory between two snapshots.
A list of directory diffs.
This exception is thrown when a datanode tries to register or communicate with the namenode when it does not appear on the list of included nodes, or has been specifically excluded.
Worker class for Disk Balancer.
BlockMover supports moving blocks across Volumes.
Actual DataMover class for DiskBalancer.
Holds source and dest volumes UUIDs and their BasePaths that disk balancer will be operating against.
DiskBalancer is a tool that can be used to ensure that data is spread evenly across volumes of same storage type.
DiskBalancerCluster represents the nodes that we are working against.
Constants used by Disk Balancer.
DiskBalancerDataNode represents a DataNode that exists in the cluster.
Disk Balancer Exceptions.
Results returned by the RPC layer of DiskBalancer.
DiskBalancerVolume represents a volume in the DataNode.
DiskBalancerVolumeSet is a collection of storage devices on the data node which are of similar StorageType.
When the kernel reports an "Input/output error", this exception is used to represent corruption (e.g., a bad disk track) on a disk file.
Dispatching block replica moves between datanodes.
A class for keeping track of block locations in the dispatcher.
 
A class that keeps track of a datanode.
 
Simple class encapsulating all of the configuration that the DataNode loads at startup time.
 
A DropSPSWorkCommand is an instruction to a datanode to drop the SPSWorker's pending block storage movement queues.
CLI for the erasure code encoding operations.
This interface defines the methods to get status pertaining to blocks of type BlockType.STRIPED in FSNamesystem of a NameNode.
Class for verifying whether the cluster setup can support all enabled EC policies.
An implementation of the abstract class EditLogInputStream, which reads edits from a file.
An implementation of the abstract class EditLogOutputStream, which stores edits in a local file.
Thrown when there's a failure to read an edit log op from disk when loading edits.
A generic abstract class to support reading edits log data from persistent storage.
A generic abstract class to support journaling of edits logs into a persistent storage.
EditLogTailer represents a thread which periodically reads from edits journals and applies the transactions contained within to a given FSNamesystem.
A double-buffer for edits.
Used to inject certain faults for testing.
Manages the list of encryption zones in the filesystem.
EnumCounters<E extends Enum<E>>
Counters for an enum type.
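A minimal sketch of the per-enum-constant counter idea; the ordinal-indexed array layout is an assumption about the implementation, and the StorageType enum here is illustrative, not the HDFS one:

```java
// Sketch of an EnumCounters-style class: one long counter per enum constant,
// stored in an array indexed by the constant's ordinal.
public final class EnumCountersDemo<E extends Enum<E>> {
    private final long[] counters;

    EnumCountersDemo(Class<E> enumClass) {
        counters = new long[enumClass.getEnumConstants().length];
    }

    void add(E e, long delta) { counters[e.ordinal()] += delta; }
    long get(E e)             { return counters[e.ordinal()]; }

    enum StorageType { DISK, SSD, ARCHIVE }   // illustrative enum for the demo

    public static void main(String[] args) {
        EnumCountersDemo<StorageType> c = new EnumCountersDemo<>(StorageType.class);
        c.add(StorageType.DISK, 5);
        c.add(StorageType.DISK, 2);
        c.add(StorageType.SSD, 1);
        System.out.println(c.get(StorageType.DISK) + " " + c.get(StorageType.SSD));
        // prints 7 1
    }
}
```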
EnumDoubles<E extends Enum<E>>
Similar to EnumCounters except that the value type is double.
This manages erasure coding policies predefined and activated in the system.
ErasureCodingWorker handles the erasure coding reconstruction work commands.
A ErrorReportAction is an instruction issued by BPOfferService to BPServiceActor about a particular block encapsulated in errorMessage.
Handle exceptions.
Executes a given plan.
Exit status - The values associated with each exit status is directly mapped to the process's exit code in command line.
Object for passing block keys.
Expose the ExternalSPS metrics.
This class handles the external SPS block movements.
This class used to connect to Namenode and gets the required information to SPS from Namenode state.
Used to inject certain faults for testing.
This class scans the paths recursively.
This is the JMX management interface for ExternalSPS information.
This class starts and runs external SPS service.
Injects faults in the metadata and data related operations on datanode volumes.
If a previous user of a resource tries to use a shared resource, after fenced by another user, this exception is thrown.
Response to a journal fence request.
An interface for scanning the directory recursively and collect files under the given directory.
The difference of an INodeFile between two snapshots.
A list of FileDiffs for storing snapshot data.
This class abstracts out various file IO operations performed by the DataNode and invokes profiling (for collecting stats) and fault injection (for testing) event hooks before and after each file IO.
Lists the types of file system operations.
Journal manager for the common case of edits files being written to a storage directory.
Record of an edit log that has been located and had its filename parsed.
This class is used to represent provided blocks that are file regions, i.e., can be described using (path, offset, length).
Feature for under-construction file.
Feature for file with snapshot-related information.
An instruction to a datanode to register with the namenode.
This class is used for provided replicas that are finalized.
This class describes a replica that has been finalized.
Splitting the global FSN lock into FSLock and BMLock.
Fast and accurate class to tell how much space HDFS is using.
The builder class.
This class is used in Namesystem's web server to do fsck on namenode.
This interface is used for retrieving the load related statistics of the cluster.
Manages caching for an FsDatasetImpl by using the mmap(2) and mlock(2) system calls to lock blocks into memory.
A factory for creating FsDatasetImpl objects.
This interface defines the methods to get the status of the FSDataset of a data node.
This is a service provider interface for the underlying storage that stores replicas for a data node.
A factory for creating FsDatasetSpi objects.
It behaves as an unmodifiable list of FsVolume.
Utility methods.
 
Both FSDirectory and FSNamesystem manage the state of the namespace.
 
 
FSEditLog maintains a log of the namespace modifications.
 
Stream wrapper that keeps track of the current stream position.
Helper classes for reading the ops from an InputStream.
 
Class for reading editlog ops from a stream.
Class for writing editlog ops.
Op codes for the edits file.
FSImage handles checkpointing and logging of the namespace edits.
Simple container class that handles support for compressed fsimage files.
Contains inner classes for reading or writing the on-disk format for FSImages.
A one-shot class responsible for loading an image.
 
 
 
 
Loads snapshot-related information from the protobuf-based FSImage.
Saves snapshot-related information to the protobuf-based FSImage.
Utility class to read / write fsimage in protobuf format.
 
 
 
 
 
Supported section name.
 
Protobuf type hadoop.hdfs.fsimage.CacheManagerSection
Protobuf type hadoop.hdfs.fsimage.CacheManagerSection
 
Protobuf type hadoop.hdfs.fsimage.ErasureCodingSection
Protobuf type hadoop.hdfs.fsimage.ErasureCodingSection
 
Protobuf type hadoop.hdfs.fsimage.FileSummary
Protobuf type hadoop.hdfs.fsimage.FileSummary
Index for each section.
Index for each section.
 
 
This section records information about under-construction files for reconstructing the lease map.
This section records information about under-construction files for reconstructing the lease map.
Protobuf type hadoop.hdfs.fsimage.FilesUnderConstructionSection.FileUnderConstructionEntry
Protobuf type hadoop.hdfs.fsimage.FilesUnderConstructionSection.FileUnderConstructionEntry
 
 
This section records the children of each directory. NAME: INODE_DIR
This section records the children of each directory. NAME: INODE_DIR
A single DirEntry needs to fit in the default PB max message size of 64MB.
A single DirEntry needs to fit in the default PB max message size of 64MB.
 
 
Protobuf type hadoop.hdfs.fsimage.INodeReferenceSection
Protobuf type hadoop.hdfs.fsimage.INodeReferenceSection
Protobuf type hadoop.hdfs.fsimage.INodeReferenceSection.INodeReference
Protobuf type hadoop.hdfs.fsimage.INodeReferenceSection.INodeReference
 
 
Permission is serialized as a 64-bit long. [0:24):[24:48):[48:64) (in Big Endian).
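A hedged sketch of packing two 24-bit string-table ids and 16 permission bits into one long. The field order, the widths, and the use of the low 16 bits for the mode are assumptions for illustration, not the exact FSImage encoding:

```java
// Illustrative packing of an inode permission word: a 24-bit user string id,
// a 24-bit group string id, and 16 mode bits in one 64-bit long. Field
// placement here is an assumption, not the verified FSImage layout.
public final class PermissionLongDemo {
    static long pack(int userId, int groupId, int mode) {
        return ((long) (userId & 0xFFFFFF) << 40)
             | ((long) (groupId & 0xFFFFFF) << 16)
             | (mode & 0xFFFF);
    }

    static int userId(long p)  { return (int) (p >>> 40) & 0xFFFFFF; }
    static int groupId(long p) { return (int) (p >>> 16) & 0xFFFFFF; }
    static int mode(long p)    { return (int) (p & 0xFFFF); }

    public static void main(String[] args) {
        long p = pack(7, 9, 0755);
        System.out.println(userId(p) + " " + groupId(p) + " "
                + Integer.toOctalString(mode(p)));
        // prints 7 9 755
    }
}
```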
Protobuf type hadoop.hdfs.fsimage.INodeSection.AclFeatureProto
Protobuf type hadoop.hdfs.fsimage.INodeSection.AclFeatureProto
 
Permission is serialized as a 64-bit long. [0:24):[24:48):[48:64) (in Big Endian).
Under-construction feature for INodeFile.
Under-construction feature for INodeFile.
 
Protobuf type hadoop.hdfs.fsimage.INodeSection.INode
Protobuf type hadoop.hdfs.fsimage.INodeSection.INode
Protobuf enum hadoop.hdfs.fsimage.INodeSection.INode.Type
Protobuf type hadoop.hdfs.fsimage.INodeSection.INodeDirectory
Protobuf type hadoop.hdfs.fsimage.INodeSection.INodeDirectory
 
Protobuf type hadoop.hdfs.fsimage.INodeSection.INodeFile
Protobuf type hadoop.hdfs.fsimage.INodeSection.INodeFile
 
 
Protobuf type hadoop.hdfs.fsimage.INodeSection.INodeSymlink
Protobuf type hadoop.hdfs.fsimage.INodeSection.INodeSymlink
 
Protobuf type hadoop.hdfs.fsimage.INodeSection.QuotaByStorageTypeEntryProto
Protobuf type hadoop.hdfs.fsimage.INodeSection.QuotaByStorageTypeEntryProto
 
Protobuf type hadoop.hdfs.fsimage.INodeSection.QuotaByStorageTypeFeatureProto
Protobuf type hadoop.hdfs.fsimage.INodeSection.QuotaByStorageTypeFeatureProto
 
Protobuf type hadoop.hdfs.fsimage.INodeSection.XAttrCompactProto
Protobuf type hadoop.hdfs.fsimage.INodeSection.XAttrCompactProto
 
Protobuf type hadoop.hdfs.fsimage.INodeSection.XAttrFeatureProto
Protobuf type hadoop.hdfs.fsimage.INodeSection.XAttrFeatureProto
 
 
Name: NS_INFO
Name: NS_INFO
 
Protobuf type hadoop.hdfs.fsimage.SecretManagerSection
Protobuf type hadoop.hdfs.fsimage.SecretManagerSection
Protobuf type hadoop.hdfs.fsimage.SecretManagerSection.DelegationKey
Protobuf type hadoop.hdfs.fsimage.SecretManagerSection.DelegationKey
 
Protobuf type hadoop.hdfs.fsimage.SecretManagerSection.PersistToken
Protobuf type hadoop.hdfs.fsimage.SecretManagerSection.PersistToken
 
 
This section records information about snapshot diffs. NAME: SNAPSHOT_DIFF
This section records information about snapshot diffs. NAME: SNAPSHOT_DIFF
Protobuf type hadoop.hdfs.fsimage.SnapshotDiffSection.CreatedListEntry
Protobuf type hadoop.hdfs.fsimage.SnapshotDiffSection.CreatedListEntry
 
Protobuf type hadoop.hdfs.fsimage.SnapshotDiffSection.DiffEntry
Protobuf type hadoop.hdfs.fsimage.SnapshotDiffSection.DiffEntry
Protobuf enum hadoop.hdfs.fsimage.SnapshotDiffSection.DiffEntry.Type
 
Protobuf type hadoop.hdfs.fsimage.SnapshotDiffSection.DirectoryDiff
Protobuf type hadoop.hdfs.fsimage.SnapshotDiffSection.DirectoryDiff
 
Protobuf type hadoop.hdfs.fsimage.SnapshotDiffSection.FileDiff
Protobuf type hadoop.hdfs.fsimage.SnapshotDiffSection.FileDiff
 
 
This section records information about snapshots. NAME: SNAPSHOT
This section records information about snapshots. NAME: SNAPSHOT
Protobuf type hadoop.hdfs.fsimage.SnapshotSection.Snapshot
Protobuf type hadoop.hdfs.fsimage.SnapshotSection.Snapshot
 
 
This section maps strings to ids. NAME: STRING_TABLE
This section maps strings to ids. NAME: STRING_TABLE
Protobuf type hadoop.hdfs.fsimage.StringTableSection.Entry
Protobuf type hadoop.hdfs.fsimage.StringTableSection.Entry
 
 
Static utility functions for serializing various pieces of data in the correct format for the FSImage file.
 
For validating FSImage.
 
Directory has too many items.
Path component length is too long.
FSNamesystem is a container of both transient and persisted name-space state, and does all the book-keeping work on a NameNode.
Mimics a ReentrantReadWriteLock but does not directly implement the interface so more sophisticated locking capabilities and logging/metrics are possible.
This interface defines the methods to get the status of the FSNamesystem of a name node.
 
Class that helps in checking file system permission.
FSTreeTraverser traverses directories recursively and processes files in batches.
Class representing the additional info required for traversal.
The underlying volume used to store replicas.
Filter for block file names stored on the file system volumes.
This class is to be used as a builder for FsVolumeImpl objects.
This holds volume reference count as AutoClosable resource.
This is an interface for the underlying volume.
BlockIterator will return ExtendedBlock entries from a block pool in this volume.
Tracks the files and other information related to a block on the disk. A missing file is indicated by setting the corresponding member to null.
Context for the Checkable.check(K) call.
A GenerationStamp is a Hadoop FS primitive, identified by a long.
Tool for getting configuration information from a configuration file.
HDFS implementation of a tool for getting the groups which a given user belongs to.
This servlet is used in two cases: The QuorumJournalManager, when reading edits, fetches the edit streams from the journal nodes. During edits synchronization, one journal node will fetch edits from another journal node.
 
Greedy Planner is a simple planner that computes the largest possible move at any point of time given a volumeSet.
Context that is to be used by HAState for getting/setting the current state and performing required operations.
Namenode base state to implement state machine pattern.
 
 
Protobuf type hadoop.hdfs.ActiveNodeInfo
Protobuf type hadoop.hdfs.ActiveNodeInfo
 
Extension of AuditLogger.
 
DtFetcher is an interface which permits the abstraction and separation of delegation token fetch implementations across different packages and compilation units.
PolicyProvider for HDFS protocols.
Some handy internal HDFS constants.
States, which a block can go through while it is under construction.
Defines the NameNode role.
Type of the node.
Block replica states, which it can go through while being constructed.
Startup options for rolling upgrade.
Startup options.
 
Help Command prints out detailed help about each command.
A Holder is simply a wrapper around some other object.
This interface abstracts how datanode configuration is managed.
This class manages the include and exclude files for HDFS.
An HTTP filter that can filter requests based on Hosts.
 
The HostSet allows efficient queries on matching wildcard addresses.
 
 
Signals that a snapshot is ignored.
Thrown when upgrading from software release that doesn't support reserved path to software release that supports reserved path, and when there is reserved path name in the Fsimage.
This class is used in Namesystem's jetty to retrieve/upload a file. Typically used by the Secondary NameNode to retrieve image and edit files for periodic checkpointing in non-HA deployments.
The exception is thrown when file system state is inconsistent and is not recoverable.
The exception is thrown when external version does not match current version of the application.
InMemoryAliasMap is an implementation of the InMemoryAliasMapProtocol for use with LevelDB.
CheckedFunction is akin to Function but specifies an IOException.
Protocol used by clients to read/write data about aliases of provided blocks for an in-memory implementation of the BlockAliasMap.
The result of a read from the in-memory aliasmap.
This class is the client side translator to translate requests made to the InMemoryAliasMapProtocol interface to the RPC server implementing AliasMapProtocolPB.
InMemoryLevelDBAliasMapClient is the client for the InMemoryAliasMapServer.
InMemoryLevelDBAliasMapServer is the entry point from the Namenode into the InMemoryAliasMap.
We keep an in-memory representation of the file/block hierarchy.
Information used for updating the blocksMap when deleting files.
The blocks whose replication factor need to be updated.
Information used to record quota usage delta.
Context object to record blocks and inodes that need to be reclaimed
 
The AccessControlEnforcer allows implementations to override the default file system permission checking logic enforced on a file system object.
 
 
The attributes of an inode.
A read-only copy of the inode attributes.
For validating FSImages.
 
Directory INode class.
A pair of Snapshot and INode objects.
The attributes of an inode.
 
A copy of the inode directory attributes.
I-node for closed file.
The attributes of a file.
A copy of the inode file attributes.
An id which uniquely identifies an inode.
Storing all the INodes and maintaining the mapping between INode ID and INode.
A reference to an inode.
 
An anonymous reference with reference count.
A reference with a fixed name.
For validating INodeReference subclasses.
Contains INodes information resolved from a given path.
An INode representing a symbolic link.
INode with additional fields including id, name, permission, access time and modification time.
Translates from edit log ops to inotify events.
An inter-datanode protocol for updating generation stamps.
 
Implementation for protobuf service that forwards requests received on InterDatanodeProtocolPB to the InterDatanodeProtocol server implementation.
This class is the client side translator to translate the requests made on InterDatanodeProtocol interfaces to the RPC server implementing InterDatanodeProtocolPB.
Protocol used to communicate between JournalNodes for journal sync.
Protocol used to communicate between journal nodes for journal sync.
 
Protobuf type hadoop.hdfs.qjournal.GetStorageInfoRequestProto
Protobuf type hadoop.hdfs.qjournal.GetStorageInfoRequestProto
 
Protobuf service hadoop.hdfs.qjournal.InterQJournalProtocolService
 
 
 
Implementation for protobuf service that forwards requests received on InterQJournalProtocolPB to the InterQJournalProtocol server implementation.
This class is the client side translator to translate the requests made on InterQJournalProtocol interfaces to the RPC server implementing InterQJournalProtocolPB.
This exception is thrown when a datanode sends a full block report but it is rejected by the Namenode due to an invalid lease (expired or otherwise).
Indicates that SASL protocol negotiation expected to read a pre-defined magic number, but the expected value was not seen.
Channel to a remote JournalNode using Hadoop IPC.
Used by Load Balancers to find the active NameNode.
ItemInfo is a file info object for which the policy needs to be satisfied.
Tool to get data from a NameNode or DataNode using MBeans. Currently the following MBeans are available (under the hadoop domain): hadoop:service=NameNode,name=FSNamesystemState (static); hadoop:service=NameNode,name=NameNodeActivity (dynamic); hadoop:service=NameNode,name=RpcActivityForPort9000 (dynamic); hadoop:service=DataNode,name=RpcActivityForPort9867 (dynamic); hadoop:name=service=DataNode,FSDatasetState-UndefinedStorageId663800459 (static); hadoop:service=DataNode,name=DataNodeActivity-UndefinedStorageId-520845215 (dynamic). Implementation note: all logging is sent to System.err, since it is a command-line tool.
A JournalNode can manage journals for several clusters at once.
Used for injecting faults in QuorumJournalManager tests.
Information that describes a journal.
A JournalManager is responsible for managing a single place of storing edit logs.
Indicates that a journal cannot be used to load a certain range of edits.
The JournalNode is a daemon which allows namenodes using the QuorumJournalManager to log and retrieve edits stored remotely.
Encapsulates the HTTP server started by the Journal Service.
This is the JMX management interface for JournalNode information.
 
A Journal Sync thread runs through the lifetime of the JN.
Exception indicating that a call has been made to a JournalNode which is not yet formatted.
 
Protocol used to journal edits to a remote node.
Protocol used to journal edits to a remote node.
Implementation for protobuf service that forwards requests received on JournalProtocolPB to the JournalProtocol server implementation.
This class is the client side translator to translate the requests made on JournalProtocol interfaces to the RPC server implementing JournalProtocolPB.
Manages a collection of Journals.
A connector that understands JSON data cluster models.
JSON utilities.
 
The class provides utilities for key and token management.
 
LayoutFlags represent features which the FSImage and edit logs can either support or not, independently of layout version.
This class tracks changes in the layout version of HDFS.
Enums for features that change the layout version before rolling upgrade is supported.
Feature information.
The interface to be implemented by NameNode and DataNode layout features.
The lease that was being used to create this file has expired.
LeaseManager does the lease housekeeping for writing on files.
An input stream with length.
A LevelDB based implementation of BlockAliasMap.
Class specifying reader options for the LevelDBFileRegionAliasMap.
This class is used as a reader for block maps which are stored as LevelDB files.
This class is used as a writer for block maps which are stored as LevelDB files.
Interface for Writer options.
A low memory linked hash set implementation, which uses an array for storing the elements and linked lists for collision resolution.
A low memory linked hash set implementation, which uses an array for storing the elements and linked lists for collision resolution.
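The two entries above describe the same core idea: a hash set whose buckets live in a plain array and whose collisions are resolved with singly linked entry chains. A minimal, hypothetical sketch of that layout (not the HDFS class; the name `TinyLinkedHashSet` and its methods are invented for illustration):

```java
/** Hypothetical sketch of an array-backed hash set with chained collision resolution. */
public class TinyLinkedHashSet<T> {
    private static final class Entry<T> {
        final T element;
        Entry<T> next;                       // collision chain within one bucket
        Entry(T element, Entry<T> next) { this.element = element; this.next = next; }
    }

    private final Entry<T>[] buckets;        // one chain head per bucket

    @SuppressWarnings("unchecked")
    public TinyLinkedHashSet(int capacity) {
        buckets = (Entry<T>[]) new Entry[capacity];
    }

    private int index(Object e) {
        return (e.hashCode() & 0x7fffffff) % buckets.length;
    }

    /** Returns true if the element was added, false if it was already present. */
    public boolean add(T element) {
        int i = index(element);
        for (Entry<T> cur = buckets[i]; cur != null; cur = cur.next) {
            if (cur.element.equals(element)) {
                return false;                // duplicate: leave the set unchanged
            }
        }
        buckets[i] = new Entry<>(element, buckets[i]);  // prepend to the chain
        return true;
    }

    public boolean contains(Object element) {
        for (Entry<T> cur = buckets[index(element)]; cur != null; cur = cur.next) {
            if (cur.element.equals(element)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        TinyLinkedHashSet<String> set = new TinyLinkedHashSet<>(4);
        assert set.add("a");
        assert !set.add("a");                // second insert is a no-op
        assert set.contains("a");
        assert !set.contains("b");
        System.out.println("ok");
    }
}
```

The memory saving over `java.util.HashSet` comes from storing only the element and one `next` pointer per entry; the real implementations add resizing and iteration order on top of this skeleton.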
This class is used for all replicas which are on local storage media and hence, are backed by files.
 
This class defines a replica in a pipeline, which includes a persistent replica being written to by a dfs client or a temporary replica being replicated by a source datanode or being copied for the balancing purpose.
A tool used to list all snapshottable directories that are owned by the current user.
A tool used to list all snapshottable directories that are owned by the current user.
Represents an HDFS block that is mapped by the DataNode.
Maps block to DataNode cache region.
Creates MappableBlockLoader.
A matcher interface for matching nodes.
Static functions for dealing with files of the same format that the Unix "md5sum" utility writes.
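The md5sum file format referenced above is a 32-hex-digit digest followed by a separator and the file name. A hypothetical sketch of parsing one such line (the `Md5SumLine` class is invented for illustration and is not the HDFS utility):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Hypothetical sketch: extract the digest from a Unix "md5sum"-style line. */
public class Md5SumLine {
    // "<32 hex digits>" then a space or '*' (binary marker), then the file name.
    private static final Pattern LINE =
        Pattern.compile("([a-fA-F0-9]{32})( |\\*)(.+)");

    /** Returns the hex digest, or null if the line is not in md5sum format. */
    public static String digestOf(String line) {
        Matcher m = LINE.matcher(line.trim());
        return m.matches() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String line = "d41d8cd98f00b204e9800998ecf8427e  empty.txt";
        assert "d41d8cd98f00b204e9800998ecf8427e".equals(digestOf(line));
        assert digestOf("not a digest line") == null;
        System.out.println("ok");
    }
}
```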
Maps block to memory.
Represents an HDFS block that is mapped to memory by the DataNode.
Context data for an ongoing NameNode metadata recovery process.
Exception thrown when the user has requested processing to stop.
MetricsLoggerTask can be used as utility to dump metrics to log.
MountVolumeMap contains information of the relationship between underlying filesystem mount and datanode volumes.
This window makes sure to keep blocks that have been moved within a fixed time interval (default is 1.5 hours).
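The fixed-time-interval tracking described above can be sketched as a map from block ID to the time of its last move, with membership decided by comparing against the window. This is a hypothetical illustration (the `MovedWindow` class and its methods are invented, not the HDFS implementation):

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical sketch: remember recently moved block IDs for a fixed time window. */
public class MovedWindow {
    private final long windowMs;
    private final Map<Long, Long> lastMoveMs = new HashMap<>();  // blockId -> move time

    public MovedWindow(long windowMs) {
        this.windowMs = windowMs;
    }

    /** Record that a block was moved at the given time. */
    public void record(long blockId, long nowMs) {
        lastMoveMs.put(blockId, nowMs);
    }

    /** True if the block was moved within the last windowMs milliseconds. */
    public boolean wasRecentlyMoved(long blockId, long nowMs) {
        Long moved = lastMoveMs.get(blockId);
        return moved != null && nowMs - moved <= windowMs;
    }

    public static void main(String[] args) {
        MovedWindow w = new MovedWindow(90L * 60 * 1000);  // 1.5 hours, matching the default above
        w.record(7L, 0L);
        assert w.wasRecentlyMoved(7L, 60_000L);            // one minute later: still in the window
        assert !w.wasRecentlyMoved(7L, 100L * 60 * 1000);  // 100 minutes later: expired
        System.out.println("ok");
    }
}
```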
A class for keeping track of a block and its locations
 
 
Ignore fields with default values.
File name distribution visitor.
NameNode serves as both directory namespace manager and "inode table" for the Hadoop DFS.
Categories of operations supported by the namenode.
Namenode RPC address parameter.
Base class for name-node command.
The class provides utilities for accessing a NameNode.
Thrown when NameNode format fails.
This class provides rudimentary checking of DFS volumes for errors and sub-optimal conditions.
 
Encapsulates the HTTP server started by the NameNode.
 
Enums for features that change the layout version.
This class is for maintaining the various NameNode activity statistics and publishing them through the metrics interfaces.
This is the JMX management interface for namenode information.
Protocol that a secondary NameNode uses to communicate with the NameNode.
Protocol that a secondary NameNode uses to communicate with the NameNode.
The full set of RPC methods implemented by the Namenode.
Implementation for protobuf service that forwards requests received on NamenodeProtocolPB to the NamenodeProtocol server implementation.
This class is the client side translator to translate the requests made on NamenodeProtocol interfaces to the RPC server implementing NamenodeProtocolPB.
Create proxy objects to communicate with a remote NN.
Information sent by a subordinate name-node to the active name-node during the registration process.
NameNodeResourceChecker provides a method - hasAvailableDiskSpace - which will return true if and only if the NameNode has disk space available on all required volumes, and any volume which is configured to be redundant.
This class is responsible for handling all of the RPC calls to the NameNode.
This is the JMX management interface for NameNode status information.
Utility functions for the NameNode.
Web-hdfs NameNode implementation.
NamespaceInfo is returned by the name-node in reply to a data-node handshake.
 
To print the namespace tree recursively for testing, e.g.:
\- foo   (INodeDirectory@33dd2717)
  \- sub1   (INodeDirectory@442172)
    +- file1   (INodeFile@78392d4)
    +- file2   (INodeFile@78392d5)
    +- sub11   (INodeDirectory@8400cff)
      \- file3   (INodeFile@78392d6)
    \- z_file4   (INodeFile@45848712)
For visiting namespace trees.
Snapshot and INode.
For visiting any INode.
Namesystem operations.
Map block to persistent memory with native PMDK libs.
Represents an HDFS block that is mapped to persistent memory by the DataNode.
A servlet to print out the network topology.
 
Exception when no edits are available.
One of the NameNodes acting as the target of an administrative command (e.g. failover).
 
NNStorage is responsible for management of the StorageDirectories used by the NameNode.
Implementation of StorageDirType specific to namenode storage. A storage directory can be of type IMAGE, which stores only the fsimage; of type EDITS, which stores only edits; or of type IMAGE_AND_EDITS, which stores both.
The filenames used for storing the images.
The NNStorageRetentionManager is responsible for inspecting the storage directories of the NN and enforcing a retention policy on checkpoints and edit logs.
 
NodePlan is a set of volumeSetPlans.
Generic class specifying information that needs to be sent to the name-node during the registration process.
Some unit-test or temporary replica maps do not need to lock with DataSetLockManager.
An immutable object that stores the number of live replicas and the number of decommissioned replicas.
 
This class implements an offline edits viewer, a tool that can be used to view edit logs.
 
An implementation of OfflineEditsVisitor can traverse the structure of a Hadoop edits log and respond to each of the structures within the file.
EditsVisitorFactory for different implementations of EditsVisitor.
OfflineImageViewer to dump the contents of a Hadoop image file to XML or the console.
OfflineImageViewerPB to dump the contents of a Hadoop image file to XML or the console.
A utility class to help detect resources (nodes/ disks) whose aggregate latency is an outlier within a given set.
This exception is thrown when the name node runs out of V1 (legacy) generation stamps.
A filter to change parameter names to lower cases so that parameter names are considered as case insensitive.
Utilities for converting protobuf classes to and from implementation classes and other helper utilities to help in dealing with protobuf.
Class representing a corruption in the PBImageCorruptionDetector processor.
The PBImageCorruptionDetector detects corruptions in the image.
A PBImageDelimitedTextWriter generates a text representation of the PB fsimage, with each element separated by a delimiter string.
PBImageXmlWriter walks over an fsimage structure and writes out an equivalent XML document that contains the fsimage's components.
 
Class that represents a file on disk which persistently stores a single long value.
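Persisting a single long safely usually means writing a temporary file and atomically renaming it into place, so a crash mid-write never leaves a torn value. A hypothetical sketch of that pattern (the `LongFile` class is invented for illustration, not the HDFS implementation):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

/** Hypothetical sketch: persist a single long via write-temp-then-atomic-rename. */
public class LongFile {
    public static void writeLong(Path file, long value) throws IOException {
        Path tmp = file.resolveSibling(file.getFileName() + ".tmp");
        Files.write(tmp, Long.toString(value).getBytes(StandardCharsets.UTF_8));
        // Atomic rename so readers never observe a partially written value.
        Files.move(tmp, file, StandardCopyOption.REPLACE_EXISTING,
                   StandardCopyOption.ATOMIC_MOVE);
    }

    public static long readLong(Path file, long defaultValue) throws IOException {
        if (!Files.exists(file)) {
            return defaultValue;
        }
        return Long.parseLong(
            new String(Files.readAllBytes(file), StandardCharsets.UTF_8).trim());
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempDirectory("longfile").resolve("value");
        assert readLong(f, -1L) == -1L;      // missing file falls back to the default
        writeLong(f, 42L);
        assert readLong(f, -1L) == 42L;
        System.out.println("ok");
    }
}
```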
Indicates a particular phase of the namenode startup sequence.
Class that implements Plan Command.
Planner interface allows different planners to be created.
Returns a planner based on the user defined tags.
Maps block to persistent memory by using mapped byte buffer.
Represents an HDFS block that is mapped to persistent memory by DataNode with mapped byte buffer.
Manage the persistent memory volumes.
This abstract class is used as a base class for provided replicas.
This class allows us to manage and multiplex between storages local to datanodes, and provided storage.
An abstract DatanodeDescriptor to track datanodes with provided storages.
Protocol used to communicate between QuorumJournalManager and each JournalNode.
Protocol used to journal edits to a JournalNode participating in the quorum journal.
 
acceptRecovery()
acceptRecovery()
 
Protobuf type hadoop.hdfs.qjournal.AcceptRecoveryResponseProto
Protobuf type hadoop.hdfs.qjournal.AcceptRecoveryResponseProto
 
canRollBack()
canRollBack()
 
Protobuf type hadoop.hdfs.qjournal.CanRollBackResponseProto
Protobuf type hadoop.hdfs.qjournal.CanRollBackResponseProto
 
discardSegments()
discardSegments()
 
Protobuf type hadoop.hdfs.qjournal.DiscardSegmentsResponseProto
Protobuf type hadoop.hdfs.qjournal.DiscardSegmentsResponseProto
 
doFinalize()
doFinalize()
 
Protobuf type hadoop.hdfs.qjournal.DoFinalizeResponseProto
Protobuf type hadoop.hdfs.qjournal.DoFinalizeResponseProto
 
doPreUpgrade()
doPreUpgrade()
 
Protobuf type hadoop.hdfs.qjournal.DoPreUpgradeResponseProto
Protobuf type hadoop.hdfs.qjournal.DoPreUpgradeResponseProto
 
doRollback()
doRollback()
 
Protobuf type hadoop.hdfs.qjournal.DoRollbackResponseProto
Protobuf type hadoop.hdfs.qjournal.DoRollbackResponseProto
 
doUpgrade()
doUpgrade()
 
Protobuf type hadoop.hdfs.qjournal.DoUpgradeResponseProto
Protobuf type hadoop.hdfs.qjournal.DoUpgradeResponseProto
 
finalizeLogSegment()
finalizeLogSegment()
 
Protobuf type hadoop.hdfs.qjournal.FinalizeLogSegmentResponseProto
Protobuf type hadoop.hdfs.qjournal.FinalizeLogSegmentResponseProto
 
format()
format()
 
Protobuf type hadoop.hdfs.qjournal.FormatResponseProto
Protobuf type hadoop.hdfs.qjournal.FormatResponseProto
 
getEditLogManifest()
getEditLogManifest()
 
Protobuf type hadoop.hdfs.qjournal.GetEditLogManifestResponseProto
Protobuf type hadoop.hdfs.qjournal.GetEditLogManifestResponseProto
 
getJournalCTime()
getJournalCTime()
 
Protobuf type hadoop.hdfs.qjournal.GetJournalCTimeResponseProto
Protobuf type hadoop.hdfs.qjournal.GetJournalCTimeResponseProto
 
getJournaledEdits()
getJournaledEdits()
 
Protobuf type hadoop.hdfs.qjournal.GetJournaledEditsResponseProto
Protobuf type hadoop.hdfs.qjournal.GetJournaledEditsResponseProto
 
getJournalState()
getJournalState()
 
Protobuf type hadoop.hdfs.qjournal.GetJournalStateResponseProto
Protobuf type hadoop.hdfs.qjournal.GetJournalStateResponseProto
 
Protobuf type hadoop.hdfs.qjournal.HeartbeatRequestProto
Protobuf type hadoop.hdfs.qjournal.HeartbeatRequestProto
 
void response
void response
 
isFormatted()
isFormatted()
 
Protobuf type hadoop.hdfs.qjournal.IsFormattedResponseProto
Protobuf type hadoop.hdfs.qjournal.IsFormattedResponseProto
 
Protobuf type hadoop.hdfs.qjournal.JournalIdProto
Protobuf type hadoop.hdfs.qjournal.JournalIdProto
 
Protobuf type hadoop.hdfs.qjournal.JournalRequestProto
Protobuf type hadoop.hdfs.qjournal.JournalRequestProto
 
Protobuf type hadoop.hdfs.qjournal.JournalResponseProto
Protobuf type hadoop.hdfs.qjournal.JournalResponseProto
 
newEpoch()
newEpoch()
 
Protobuf type hadoop.hdfs.qjournal.NewEpochResponseProto
Protobuf type hadoop.hdfs.qjournal.NewEpochResponseProto
 
The storage format used on local disk for previously accepted decisions.
The storage format used on local disk for previously accepted decisions.
 
prepareRecovery()
prepareRecovery()
 
Protobuf type hadoop.hdfs.qjournal.PrepareRecoveryResponseProto
Protobuf type hadoop.hdfs.qjournal.PrepareRecoveryResponseProto
 
purgeLogs()
purgeLogs()
 
Protobuf type hadoop.hdfs.qjournal.PurgeLogsResponseProto
Protobuf type hadoop.hdfs.qjournal.PurgeLogsResponseProto
 
Protocol used to journal edits to a JournalNode.
 
 
 
Protobuf type hadoop.hdfs.qjournal.RequestInfoProto
Protobuf type hadoop.hdfs.qjournal.RequestInfoProto
 
Protobuf type hadoop.hdfs.qjournal.SegmentStateProto
Protobuf type hadoop.hdfs.qjournal.SegmentStateProto
 
startLogSegment()
startLogSegment()
 
Protobuf type hadoop.hdfs.qjournal.StartLogSegmentResponseProto
Protobuf type hadoop.hdfs.qjournal.StartLogSegmentResponseProto
 
Implementation for protobuf service that forwards requests received on JournalProtocolPB to the JournalProtocol server implementation.
This class is the client side translator to translate the requests made on JournalProtocol interfaces to the RPC server implementing JournalProtocolPB.
Gets the current status of disk balancer command.
A JournalManager that writes to a set of remote JournalNodes, requiring a quorum of nodes to ack each write.
Quota types.
Counters for quota counts.
 
 
Counters for namespace, storage space and storage type space quota and usage.
 
An implementation of RamDiskReplicaTracker that uses an LRU eviction scheme.
 
A ReadOnlyList is an unmodifiable list, which supports read-only operations.
Utilities for ReadOnlyList.
A data structure to store the blocks in an incremental block report.
 
Receiver
This class is used on the server side.
This is a server side utility class that handles common logic related to parameter reconfiguration.
Exception indicating that a replica is already being recovered.
Class for handling re-encrypt EDEK operations.
Class for finalizing re-encrypt EDEK operations, by updating file xattrs with edeks returned from reencryption.
Class for de-duplication of instances.
Interface for the reference count holder
A RegisterCommand is an instruction to a datanode to register with the namenode.
 
An enumeration of logs available on a remote NameNode.
Information about a single remote NameNode
This represents block replicas which are stored in DataNode.
Exception indicating that the target block already exists and is not set to be recovered/overwritten.
This class represents replicas being written.
This class is to be used as a builder for ReplicaInfo objects.
Fast and accurate class to tell how much space HDFS is using.
This class includes a replica being actively written and the reference to the fs volume where this replica is located.
This class is used by datanodes to maintain meta data of its replicas.
This defines the interface of a replica in a pipeline that is being written to.
Contains the input streams for the data and checksum of a replica.
Contains the output streams for the data and checksum of a replica.
Replica recovery information.
This interface defines the methods to get status pertaining to blocks of type BlockType.CONTIGUOUS in FSNamesystem of a NameNode.
This class represents replicas that are under block recovery. It has a recovery id equal to the generation stamp that the replica will be bumped to after recovery; the recovery id is used to handle multiple concurrent block recoveries.
This class represents a replica that is waiting to be recovered.
ReportBadBlockAction is an instruction issued by BPOfferService to BPServiceActor to report a bad block to the namenode.
Executes the report command.
 
Used for calculating file system space reserved for non-HDFS data.
Used for creating instances of ReservedSpaceCalculator.
Based on absolute number of reserved bytes.
Calculates absolute and percentage based reserved space and picks the one that will yield less reserved space.
Calculates absolute and percentage based reserved space and picks the one that will yield more reserved space.
Based on percentage of total capacity in the storage.
Exception related to rolling upgrade.
A class for exposing a rolling window view on the event that occur over time.
A class to manage the set of RollingWindows.
Represents an operation within a TopWindow.
Represents a snapshot of the rolling window.
Represents a user who called an Op within a TopWindow.
Choose volumes with the same storage type in round-robin order.
Read-write lock interface for FSNamesystem.
This lock mode is used for FGL.
SafeMode related operations.
Negotiates SASL for DataTransferProtocol on behalf of a server.
 
Context for an ongoing SaveNamespace operation.
The Secondary NameNode is a helper to the primary NameNode.
JMX information of the secondary NameNode.
Utility class to start a datanode in a secure cluster, first obtaining privileged resources before main startup and handing them to the datanode.
Stash necessary resources needed for datanode operation in a secure environment.
Generate the next valid block group ID by incrementing the maximum block group ID allocated so far, with the first 2^10 block group IDs reserved.
Generate the next valid block ID by incrementing the maximum block ID allocated so far, starting at 2^30+1.
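Both generator entries above describe the same scheme: hand out monotonically increasing IDs by incrementing the highest ID allocated so far, with an initial range held back. A hypothetical sketch of that scheme (the `SequentialIdGenerator` class is invented for illustration, not the HDFS implementation):

```java
import java.util.concurrent.atomic.AtomicLong;

/** Hypothetical sketch: issue monotonically increasing IDs above a reserved range. */
public class SequentialIdGenerator {
    private final AtomicLong last;

    /** IDs up to and including lastReserved are never issued. */
    public SequentialIdGenerator(long lastReserved) {
        this.last = new AtomicLong(lastReserved);
    }

    /** Thread-safe: each caller gets a distinct, strictly increasing ID. */
    public long nextId() {
        return last.incrementAndGet();
    }

    public static void main(String[] args) {
        // Mirrors the description above: block IDs start at 2^30 + 1.
        SequentialIdGenerator gen = new SequentialIdGenerator(1L << 30);
        assert gen.nextId() == (1L << 30) + 1;
        assert gen.nextId() == (1L << 30) + 2;
        System.out.println("ok");
    }
}
```

Seeding the counter with the maximum ID found in the existing namespace is what makes "incrementing the maximum allocated so far" safe across restarts.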
Manage name-to-serial-number maps for various string tables.
 
Map object to serial number.
Base class for a server command.
Manages client short-circuit memory segments on the DataNode.
 
 
 
This class aggregates information from SlowDiskReports received via heartbeats.
This structure is a thin wrapper over disk latencies.
Disabled tracker for slow peers.
This class aggregates information from SlowPeerReports received via heartbeats.
Snapshot of a sub-tree in the namesystem.
The root directory of the snapshot.
 
A tool used to get the difference report between two snapshots, or between a snapshot and the current status of a directory.
Snapshot related exception.
A helper class defining static methods for reading/writing snapshot related information from/to FSImage.
A reference map for fsimage serialization.
SnapshotInfo maintains information for a snapshot.
 
Manage snapshottable directories and their snapshots.
This is an interface used to retrieve statistics related to snapshots.
An interface for SPSService, which exposes life cycle and processing APIs.
Thread which runs inside the NN when it's in Standby state, periodically waking up to take a checkpoint of the namespace.
Namenode standby state.
StartupProgress is used in various parts of the namenode codebase to indicate startup progress.
Allows a caller to increment a counter for tracking progress.
Links StartupProgress to a MetricsSource to expose its information via JMX.
Servlet that provides a JSON representation of the namenode's current startup progress.
StartupProgressView is an immutable, consistent, read-only view of namenode startup progress.
StatisticsEditsVisitor implements a text version of EditsVisitor that aggregates counts of processed op codes.
Indicates run status of a Phase.
A step in the plan.
A step performed by the namenode during a Phase of startup.
Indicates a particular type of Step.
Storage information file.
Interface for classes which need to have the user confirm their formatting during NameNode -format and other similar operations.
One of the storage directories.
An interface to denote a storage directory type. Implementations can define a type for a storage directory by implementing this interface.
 
Block report for a Datanode storage.
Interface which implementations of JournalManager can use to report errors on underlying storage directories.
Common class for storage information.
Encapsulates the URI and storage medium that together describe a storage directory.
A utility class that encapsulates checking storage locations during DataNode startup.
This class implements block storage policy operations.
Setting storagePolicy on a file after the file write will only update the new storage policy type in Namespace, but physical block storage movement will not happen until user runs "Mover Tool" explicitly for such files.
This class contains information about attempted blocks and their last attempted or reported time stamps.
Maintains storage type map with the available datanodes in the cluster.
Keeps datanode with its respective set of supported storage types.
This manages satisfy storage policy invoked path ids and expose methods to process these path ids.
Aggregate the storage type information for a set of blocks
Report of block received and deleted per Datanode storage.
Statistics per StorageType.
Computes striped composite CRCs over reconstructed chunk CRCs.
Computes running MD5-of-CRC over reconstructed chunk CRCs.
StripedBlockChecksumReconstructor reconstructs one or more missing striped blocks in the striped block group; the minimum number of live striped blocks must be no less than the number of data blocks.
Stores striped block info that can be used for block reconstruction.
DtFetcher for SWebHdfsFileSystem using the base class HdfsDtFetcher impl.
 
A TeeOutputStream writes its output to multiple output streams.
This class is used for block maps stored as text files, with a specified delimiter.
Class specifying reader options for the TextFileRegionAliasMap.
This class is used as a reader for block maps which are stored as delimited text files.
This class is used as a writer for block maps which are stored as delimited text files.
Interface for Writer options.
Class specifying writer options for the TextFileRegionAliasMap.
An implementation of AsyncChecker that skips checking recently checked objects.
 
 
TokenVerifier<T extends org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier>
Interface to verify delegation tokens passed through WebHDFS.
An AuditLogger that sends logged data directly to the metrics systems.
This class is a common place for NNTop configuration.
The interface to the top metrics.
This class provides fetching a specified file from the NameNode.
 
Exception indicating that the replica is in an unexpected state.
This exception is thrown when a node that has not previously registered is trying to access the name node.
This exception is thrown if resolving topology path for a node fails.
This exception is thrown when an operation is not supported.
The FileSystem path parameter.
Inject user information to http operations.
 
 
Defines the outcomes of running a disk check operation against a volume.
This interface specifies the policy for choosing volumes to store replicas.
Summarizes information about data volume failures on a DataNode.
VolumeScanner scans a single volume.
Used for injecting call backs in VolumeScanner and BlockScanner tests.
DtFetcher for WebHdfsFileSystem using the base class HdfsDtFetcher impl.
 
WebImageViewer loads a fsimage and exposes read-only WebHDFS API for its namespace.
 
Feature for extended attributes.
Class to pack XAttrs into byte[].
Note: this format is used both in-memory and on-disk.
There are four types of extended attributes <XAttr> defined by the following namespaces:
USER - extended user attributes: these can be assigned to files and directories to store arbitrary additional information.
XAttrStorage is used to read and set xattrs for an inode.
An XmlEditsVisitor walks over an EditLog structure and writes out an equivalent XML document that contains the EditLog's components.
An XmlImageVisitor walks over an fsimage structure and writes out an equivalent XML document that contains the fsimage's components.
General xml utilities.
Exception that reflects an invalid XML document.
Represents a bag of key-value pairs encountered during parsing an XML file.
Exception that reflects a string that cannot be unmangled.