All Classes and Interfaces

Class
Description
Proxy information for a connection to a NameNode.
Access time parameter.
Provide an OAuth2 access token to be used to authenticate http calls in WebHDFS.
Access tokens generally expire.
Indicates a failure manipulating an ACL.
AclPermission parameter.
AddBlockFlag provides hints for new block allocation and placement.
A response to adding an ErasureCoding policy.
AllUsers parameter.
The exception thrown when you ask to create a file that is already being created but is not yet closed.
Represents a peer that we communicate with by using a basic Socket that has no associated Channel.
A struct-like class for holding partial listings returned by the batched listing API.
A Block is a Hadoop FS primitive, identified by its block ID (a long).
Encapsulates various options related to how fine-grained data checksums are combined into block-level checksums.
Algorithms/types denoting how block-level checksums are computed using lower-level chunk checksums/CRCs.
Block construction stage.
A block and the full path information to the block data file and the metadata file stored on the local file system.
BlockMetadataHeader manages metadata for data blocks on Datanodes.
This exception is thrown when a read encounters a block that has no locations associated with it.
Indicates a failure due to block pinning.
A BlockReader is responsible for reading a single block from a single datanode.
Utility class to create BlockReader implementations.
Profiles BlockReaderLocal short circuit read latencies when short-circuit read metrics are enabled through DfsClientConf.ShortCircuitConf.scrMetricsEnabled.
This class maintains a metric of rolling average latency for short circuit reads.
This is a wrapper around connection to datanode and understands checksum, offset etc.
Options that can be specified when manually triggering a block report.
Block size parameter.
A block storage policy describes how to select the storage types for the replicas of a block.
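The core of such a policy is an ordered list of preferred storage types, with later replicas falling back to the last preference when the list runs out. The following is a minimal self-contained sketch of that idea; the class, enum, and method names are hypothetical and this is not Hadoop's actual BlockStoragePolicy API:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: a storage policy lists preferred storage types in
// order; when there are more replicas than preferred entries, the remaining
// replicas reuse the last preferred type.
public class StoragePolicySketch {
    public enum StorageType { SSD, DISK, ARCHIVE }

    /** Pick one storage type per replica from the ordered preference list. */
    public static List<StorageType> chooseStorageTypes(List<StorageType> preferred,
                                                       int replication) {
        List<StorageType> chosen = new ArrayList<>();
        for (int i = 0; i < replication; i++) {
            // Use the i-th preference if it exists, else fall back to the last one.
            chosen.add(preferred.get(Math.min(i, preferred.size() - 1)));
        }
        return chosen;
    }
}
```

Under this sketch, a ONE_SSD-style preference list of [SSD, DISK] with replication 3 yields [SSD, DISK, DISK], which roughly mirrors how that built-in policy behaves.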
A block token selector for HDFS.
Type of a block.
Buffer size parameter.
Manage byte array creation and release.
Configuration for ByteArrayManager.
OutputStream that writes into a ByteBuffer.
To support HTTP byte streams, a new connection to an HTTP server needs to be created each time.
This class wraps a URL and provides method to open connection.
Describes a path-based cache directive entry.
Describes a path-based cache directive.
A builder for creating new CacheDirectiveInfo instances.
Denotes a relative or absolute expiration time for a CacheDirective.
CacheDirectiveIterator is a remote iterator that iterates cache directives.
Describes a path-based cache directive.
Specifies semantics for CacheDirective operations.
Describes a Cache Pool entry.
CachePoolInfo describes a cache pool.
CachePoolIterator is a remote iterator that iterates cache pools.
CachePoolStats describes cache pool statistics.
The caching strategy we should use for an HDFS read or write operation.
This exception is thrown when the length of a LocatedBlock instance can not be obtained.
ClientContext contains context information for a client.
A client-datanode protocol for block recovery.
This class is the client side translator to translate the requests made on ClientDatanodeProtocol interfaces to the RPC server implementing ClientDatanodeProtocolPB.
Global State Id context for the client.
A reference to a memory-mapped region used by an HDFS client.
This class forwards NN's ClientProtocol calls as RPC calls to the NN server while translating from the parameter types used in ClientProtocol to the new PB types.
ClientProtocol is used by user code via the DistributedFileSystem class to communicate with the NameNode.
Reader support for JSON-based datanode configuration, an alternative format to the exclude/include files configuration.
Writer support for JSON-based datanode configuration, an alternative format to the exclude/include files configuration.
The concat source paths parameter.
Obtain an access token via a credential (provided through the Configuration) using the Client Credentials Grant workflow.
A FailoverProxyProvider implementation which allows one to configure multiple URIs to connect to during fail-over.
Supply an access token obtained via a refresh token (provided through the Configuration) using the second half of the Authorization Code Grant workflow.
Provides an iterator interface for listCorruptFileBlocks.
Contains a list of paths corresponding to corrupt files and a cookie used for iterative calls to NameNode.listCorruptFileBlocks.
Exception object that is thrown when the block metadata file is corrupt.
CreateEncryptionZoneFlag is used in HdfsAdmin.createEncryptionZone(Path, String, EnumSet) to indicate what should be done when creating an encryption zone.
CreateFlag enum.
Create Parent parameter.
Obtain an access token via the credential-based OAuth2 workflow.
A little struct class to contain all fields required to perform encryption of the DataTransferProtocol.
Creates a new DataEncryptionKey on demand.
The class describes the configured admin properties for a datanode.
This class represents the primary identifier for a Datanode.
This class extends the primary identifier of a Datanode with ephemeral state, e.g. usage information, current administrative state, and the network location that is communicated to clients.
Building the DataNodeInfo.
Locally available datanode information.
Class captures information of a storage in Datanode.
The state of the storage.
Class captures information of a datanode and its storages.
A class that allows a DataNode to communicate information about usage statistics/metrics to the NameNode.
Builder class for DataNodeUsageReport.
This is a helper class that generates a live usage report by calculating the delta between current DataNode usage metrics and the usage metrics captured at the time of the last report.
Locally available datanode volume information.
Transfer data to/from datanode using a streaming protocol.
Static utilities for dealing with the protocol buffers used by the Data Transfer Protocol.
Utility methods implementing SASL negotiation for DataTransferProtocol.
Detect the dead nodes in advance, and share this information among all the DFSInputStreams in the same client.
Represents delegation token used for authentication.
A delegation token identifier that is specific to HDFS.
A delegation token that is specialized for HDFS.
Http DELETE operation parameter.
Delete operations.
Destination path parameter.
DFSClient can connect to a Hadoop Filesystem and perform basic file tasks.
Deprecated: use HdfsDataInputStream instead.
DFSClient configuration.
Configuration for short-circuit reads.
Used for injecting faults in DFSClient and DFSOutputStream tests.
DfsClientShm is a subclass of ShortCircuitShm which is used by the DfsClient.
Manages short-circuit memory segments for an HDFS client.
The client-side metrics for the hedged read feature.
Stream for reading inotify events.
DFSInputStream provides bytes from a named file.
This storage statistics tracks how many times each DFS operation was issued.
This is for counting distributed file system operations.
DFSOutputStream creates files from a stream of bytes.
DFSPacket is used by DataStreamer and DFSOutputStream.
DFSStripedInputStream reads from striped block groups.
This class supports writing files in striped layout and erasure coded format.
A utility class serving as a container for corrupted blocks, shared by client and datanode.
This class defines a partial listing of a directory to support iterative directory listing.
Keeps track of how much work has finished.
Helper class that reports how much work has been done by the node.
A class that is used to report each work item that we are working on.
Various result values.
Implementation of the abstract FileSystem for the DFS system.
HdfsDataOutputStreamBuilder provides the HDFS-specific capabilities to write file on HDFS.
DoAs parameter for proxy user.
Represents a peer that we communicate with by using blocking I/O on a UNIX domain socket.
Get statistics pertaining to blocks of type BlockType.STRIPED in the filesystem.
An EC policy loading tool that loads user-defined EC policies from an XML file.
Erasure coding policy parameter.
Result of the verification whether the current cluster setup can support all enabled EC policies.
Represents a peer that we communicate with by using an encrypted communications medium.
A simple class for representing an encryption zone.
EncryptionZoneIterator is a remote iterator that iterates over encryption zones.
A policy about how to write/read/code an erasure coding file.
HDFS internal presentation of an ErasureCodingPolicy.
Value denotes the possible states of an ErasureCodingPolicy.
Events sent by the inotify system.
Sent when an existing file is opened for append.
Sent when a file is closed after append or create.
Sent when a new file is created (including overwrite).
Sent when there is an update to directory or file (none of the metadata tracked here applies to symlinks) that is not associated with another inotify event.
Sent when a file, directory, or symlink is renamed.
Sent when a file is truncated.
Sent when a file, directory, or symlink is deleted.
A batch of events that all happened on the same transaction ID.
Contains a list of event batches, the transaction ID in the edit log up to which we read to produce these events, and the first txid we observed when producing these events (the last of which is for the purpose of determining whether we have missed events due to edit deletion).
Exclude Datanodes parameter.
Identifies a Block uniquely across the block pools.
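Because block IDs are only unique within a single block pool, such an identifier must combine the block pool ID with the block ID, and both must participate in equality and hashing. A minimal sketch of the idea; the class and field names are illustrative, not Hadoop's actual ExtendedBlock implementation:

```java
import java.util.Objects;

// Illustrative sketch: a block ID alone is only unique within one block
// pool, so the pool ID is part of the identity and of equals/hashCode.
public class BlockKey {
    private final String blockPoolId;
    private final long blockId;

    public BlockKey(String blockPoolId, long blockId) {
        this.blockPoolId = blockPoolId;
        this.blockId = blockId;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof BlockKey)) return false;
        BlockKey other = (BlockKey) o;
        return blockId == other.blockId && blockPoolId.equals(other.blockPoolId);
    }

    @Override
    public int hashCode() {
        return Objects.hash(blockPoolId, blockId);
    }
}
```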
An immutable key which identifies a block.
An ExternalBlockReader uses pluggable ReplicaAccessor objects to read from replicas.
FsAction parameter.
Deprecated: ACLs, encryption, and erasure coding are managed on FileStatus.
Http GET operation parameter.
Get operations.
Group parameter.
This interface aims to decouple the proxy creation implementation that is used in AbstractNNFailoverProxyProvider.
The public API for performing administrative functions on HDFS.
Wrapper for BlockLocation that also includes a LocatedBlock, allowing more detailed queries to the datanode about a block.
Client configuration properties.
dfs.client.block.write configuration properties.
These are deprecated config keys for client code.
dfs.client.failover configuration properties.
dfs.client.hedged.read configuration properties.
dfs.http.client configuration properties.
dfs.client.mmap configuration properties.
dfs.client.read configuration properties.
dfs.client.retry configuration properties.
dfs.client.short.circuit configuration properties.
dfs.client.read.striped configuration properties.
dfs.client.write configuration properties.
Adds deprecated keys into the configuration.
Re-encrypt encryption zone actions.
This enum wraps the storage policy IDs and names.
Storage policy satisfier service modes.
Upgrade actions.
The Hdfs implementation of FSDataInputStream.
The Hdfs implementation of FSDataOutputStream.
HDFS metadata for an entity in the filesystem.
Builder class for HdfsFileStatus instances.
Set of features potentially active on an instance.
Utility class for key provider related methods in hdfs client package.
HDFS metadata for an entity in the filesystem with locations.
HDFS metadata for an entity in the filesystem without locations.
A partial listing returned by the batched listing API.
Opaque handle to an entity in HDFS.
The public utility API for HDFS.
Http operation parameter.
Http operation interface.
Expects HTTP response 307 "Temporary Redirect".
Http operation types.
A ConfiguredFailoverProxyProvider implementation used to connect to an InMemoryAliasMap.
Access token verification failed.
Encryption key verification failed.
A little struct class to wrap an InputStream and an OutputStream.
A NNFailoverProxyProvider implementation which works on IP failover setup.
Utility methods used in WebHDFS/HttpFS JSON conversion.
Use UserGroupInformation as a fallback authenticator if the server does not use Kerberos SPNEGO HTTP authentication.
Class to contain the last block and HdfsFileStatus for the append operation.
Used by DFSClient for renewing file-being-written leases on the namenode.
Length parameter.
Associates a block with the Datanodes that contain its replicas and other block metadata (e.g. the file offset associated with this block, whether it is corrupt, whether a location is cached in memory, the security token, etc.).
Collection of blocks with their locations and the file length.
Periodically refresh the underlying cached LocatedBlocks for eligible registered DFSInputStreams.
LocatedBlock with striped block support.
Bit format in a long.
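Packing several small fields into one long via (offset, length) bit windows can be sketched in a few lines. This is an illustrative reimplementation of the idea, not the actual Hadoop class, and the names are hypothetical:

```java
// Illustrative sketch of a bit-format field inside a long: each field is a
// window of 'length' bits starting at bit 'offset'.
public class BitFormat {
    private final int offset;   // starting bit position of the field
    private final long mask;    // 'length' one-bits, pre-shifted to offset

    public BitFormat(int offset, int length) {
        this.offset = offset;
        this.mask = ((1L << length) - 1) << offset;
    }

    /** Clear the field's bits in 'record' and write 'value' into them. */
    public long combine(long value, long record) {
        return (record & ~mask) | ((value << offset) & mask);
    }

    /** Extract the field's bits from 'record'. */
    public long retrieve(long record) {
        return (record & mask) >>> offset;
    }
}
```

For example, a 10-bit permission field at offset 0 and a 6-bit replication field at offset 10 can both live in one long without interfering with each other.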
Modification time parameter.
Create proxy objects with ClientProtocol and HAServiceProtocol to communicate with a remote NN.
Wrapper for a client proxy as well as its associated service ID.
The name space quota parameter for a directory.
NewLength parameter.
Represents a peer that we communicate with by using non-blocking I/O on a Socket.
Thrown when no EC policy is set explicitly on the directory.
Overwrite parameter.
The file has not finished being written to enough datanodes yet.
Configure a connection to use OAuth2 authentication.
Sundry constants relating to OAuth2 within WebHDFS.
A FailoverProxyProvider implementation that supports reading from observer namenode(s).
Extends ObserverReadProxyProvider to support NameNode IP failover.
Offset parameter.
The old snapshot name parameter for the renameSnapshot operation.
Operation
An open file entry for use by DFSAdmin commands.
OpenFilesIterator is a remote iterator that iterates over the open files list managed by the NameNode.
Open file types to filter the results.
Outlier detection metrics - median, median absolute deviation, upper latency limit, actual latency etc.
Overwrite parameter.
Owner parameter.
Header data for each packet that goes through the read/write pipelines.
Class to handle reading packets one-at-a-time from the wire.
Param<T,D extends org.apache.hadoop.hdfs.web.resources.Param.Domain<T>>
Base class of parameters.
Utilities for converting protobuf classes to and from hdfs-client side implementation classes and other helper utilities to help in dealing with protobuf.
Represents a connection to a peer.
A cache of input stream sockets to DataNodes.
Permission parameter; uses a Short to represent a FsPermission.
Pipeline acknowledgment.
Http POST operation parameter.
Post operations.
ProvidedStorageLocation is a location in an external storage system containing the data for a block (~Replica).
Http PUT operation parameter.
Put operations.
This exception is thrown when modification to HDFS results in violation of a directory quota.
Marker interface used to annotate methods that are readonly.
A utility class that maintains statistics for reading.
ReconfigurationProtocol is used by HDFS admin to reload configuration for NN/DN without restarting them.
Protocol that clients use to communicate with the NN/DN to do reconfiguration on the fly.
This class is the client side translator to translate the requests made on ReconfigurationProtocol interfaces to the RPC server implementing ReconfigurationProtocolPB.
This is a client side utility class that handles common logic for parameter reconfiguration.
Recursive parameter.
A class representing information about re-encrypting encryption zones.
ReencryptionStatusIterator is a remote iterator that iterates over the reencryption status of encryption zones.
Rename option set parameter.
Renewer parameter.
The setting of replace-datanode-on-failure feature.
The replacement policies.
The public API for ReplicaAccessor objects.
The public API for creating a new ReplicaAccessor.
Exception indicating that DataNode does not have a replica that matches the target block.
Get statistics pertaining to blocks of type BlockType.CONTIGUOUS in the filesystem.
Replication parameter.
A FailoverProxyProvider implementation that technically does not "failover" per-se.
Rolling upgrade information.
Rolling upgrade status.
A FailoverProxyProvider implementation to support automatic msync-ing when using routers.
A FailoverProxyProvider implementation to support automatic msync-ing when using routers.
This exception is thrown when the name node is in safe mode.
Negotiates SASL for DataTransferProtocol on behalf of a client.
Sender.
The ShortCircuitCache tracks things which the client needs to access HDFS block files via short-circuit.
A ShortCircuitReplica object contains file descriptors for a block that we are reading via short-circuit local reads.
A shared memory segment used to implement short-circuit reads.
Identifies a DfsClientShm.
Uniquely identifies a slot.
A class that allows a DataNode to communicate information about all its disks that appear to be slow.
Lists the types of operations on which disk latencies are measured.
A class that allows a DataNode to communicate information about all its peer DataNodes that appear to be slow.
Snapshot access-related exception.
Resuming index of the snapshotDiffReportListing operation.
This class represents to end users the difference between two snapshots of the same directory, or the difference between a snapshot of the directory and its current state.
Representing the full path and diff type of a file/directory where changes have happened.
Records the stats related to Snapshot diff operation.
Types of the difference, which include CREATE, MODIFY, DELETE, and RENAME.
This class represents to end users the difference between two snapshots of the same directory, or the difference between a snapshot of the directory and its current state.
This class represents the difference between two snapshots of the same directory, or the difference between a snapshot of the directory and its current state.
Representing the full path and diff type of a file/directory where changes have happened.
The snapshot startPath parameter used by snapshotDiffReportListing.
The snapshot name parameter for the createSnapshot and deleteSnapshot operations.
Metadata about a snapshottable directory.
Metadata about a snapshottable directory.
Configure a connection to use SSL authentication.
Used during batched ListStatus operations.
Storage policy parameter.
Utilization report for a Datanode storage.
The storage space quota parameter for a directory.
Storage type parameter.
Striped block info that can be sent elsewhere to do block group level things, such as computing checksums.
When accessing a file in striped layout, operations on logical byte ranges in the file need to be mapped to physical byte ranges on block files stored on DataNodes.
Given a requested byte range on a striped block group, an AlignedStripe represents an inclusive StripedBlockUtil.VerticalRange that is aligned with both the byte range and boundaries of all internal blocks.
Struct holding the read statistics.
A utility to manage ByteBuffer slices for a reader.
Used to indicate the buffered data's range in the block group.
Cell is the unit of encoding used in DFSStripedOutputStream.
Indicates the coverage of a StripedBlockUtil.AlignedStripe on an internal block, and the state of the chunk in the context of the read request.
This class represents result from a striped read request.
A simple utility class representing an arbitrary vertical inclusive range starting at StripedBlockUtil.VerticalRange.offsetInBlock and lasting for StripedBlockUtil.VerticalRange.spanInBlock bytes in an internal block.
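The logical-to-physical mapping these classes describe boils down to round-robin cell placement: the file's bytes are split into cells of cellSize bytes, laid out across the data blocks of the group. A minimal sketch of that arithmetic under the assumption of data-block-only striping; the class and method names are hypothetical, not Hadoop's StripedBlockUtil API:

```java
// Illustrative sketch of the logical-to-physical mapping in a striped
// layout: cells of 'cellSize' bytes are placed round-robin across
// 'dataBlocks' internal blocks, one stripe (row) at a time.
public class StripeMath {
    /** Index of the internal (data) block holding the given logical offset. */
    public static int blockIndex(long logicalOffset, int cellSize, int dataBlocks) {
        long cell = logicalOffset / cellSize;   // global cell number
        return (int) (cell % dataBlocks);       // round-robin placement
    }

    /** Offset of the same byte inside its internal block. */
    public static long offsetInBlock(long logicalOffset, int cellSize, int dataBlocks) {
        long cell = logicalOffset / cellSize;
        long stripe = cell / dataBlocks;        // stripe (row) number
        return stripe * cellSize + logicalOffset % cellSize;
    }
}
```

With cellSize 4 and 3 data blocks, logical byte 13 lies in cell 3, which is the first cell of stripe 1, so it maps to internal block 0 at offset 5.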
This class extends DataStreamer to support writing striped blocks to datanodes.
AbstractFileSystem implementation for HDFS over the web (secure).
The set of built-in erasure coding policies.
Represents delegation token parameter as method arguments.
Class used to indicate whether a channel is trusted or not.
Thrown when an unknown cipher suite is encountered.
Unmasked permission parameter; uses a Short to represent a FsPermission.
Thrown when a symbolic link is encountered in a path.
Utilities for handling URLs.
User parameter.
ViewDistributedFileSystem extends DistributedFileSystem with additional mounting functionality.
AbstractFileSystem implementation for HDFS over the web.
A FileSystem for HDFS over the web.
A NNFailoverProxyProvider implementation which wraps old implementations directly implementing the FailoverProxyProvider interface.
XAttr is the POSIX Extended Attribute model similar to that found in traditional Operating Systems.
The exception thrown when you ask to get a non-existent XAttr.
A class representing information about re-encryption of an encryption zone.
State of re-encryption.