Package org.apache.hadoop.hdfs
Class DFSClient
java.lang.Object
org.apache.hadoop.hdfs.DFSClient
- All Implemented Interfaces:
Closeable, AutoCloseable, org.apache.hadoop.crypto.key.KeyProviderTokenIssuer, DataEncryptionKeyFactory, RemotePeerFactory, org.apache.hadoop.security.token.DelegationTokenIssuer
@Private
public class DFSClient
extends Object
implements Closeable, RemotePeerFactory, DataEncryptionKeyFactory, org.apache.hadoop.crypto.key.KeyProviderTokenIssuer
DFSClient can connect to a Hadoop Filesystem and
perform basic file tasks. It uses the ClientProtocol
to communicate with a NameNode daemon, and connects
directly to DataNodes to read/write block data.
Hadoop DFS users should obtain an instance of
DistributedFileSystem, which uses DFSClient to handle
filesystem tasks.
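DFSClient itself is annotated @Private. As a quick orientation, a minimal sketch of the recommended entry point follows; the namenode URI and path are placeholders:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    Configuration conf = new Configuration();
    // FileSystem.get returns a DistributedFileSystem for hdfs:// URIs;
    // that object drives a DFSClient internally.
    try (FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf)) {
      fs.mkdirs(new Path("/tmp/example"));
    }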
-
Nested Class Summary
Nested Classes
static class
Deprecated. Use HdfsDataInputStream instead.
static class
-
Field Summary
Fields
Fields inherited from interface org.apache.hadoop.security.token.DelegationTokenIssuer
TOKEN_LOG -
Constructor Summary
Constructors
DFSClient(org.apache.hadoop.conf.Configuration conf)
Deprecated. Deprecated at 0.21.
DFSClient(InetSocketAddress address, org.apache.hadoop.conf.Configuration conf)
DFSClient(URI nameNodeUri, org.apache.hadoop.conf.Configuration conf)
Same as this(nameNodeUri, conf, null);
DFSClient(URI nameNodeUri, org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem.Statistics stats)
Same as this(nameNodeUri, null, conf, stats);
DFSClient(URI nameNodeUri, ClientProtocol rpcNamenode, org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem.Statistics stats)
Create a new DFSClient connected to the given nameNodeUri or rpcNamenode.
-
Method Summary
long addCacheDirective(CacheDirectiveInfo info, EnumSet<CacheFlag> flags)
void addCachePool(CachePoolInfo info)
AddErasureCodingPolicyResponse[] addErasureCodingPolicies(ErasureCodingPolicy[] policies)
void addLocatedBlocksRefresh(DFSInputStream dfsInputStream): Adds the DFSInputStream to the LocatedBlocksRefresher, so that the underlying LocatedBlocks is periodically refreshed.
void addNodeToDeadNodeDetector(DFSInputStream dfsInputStream, DatanodeInfo datanodeInfo): Add the given datanode to the DeadNodeDetector.
void allowSnapshot(String snapshotRoot): Allow snapshot on a directory.
HdfsDataOutputStream append(String src, int buffersize, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, org.apache.hadoop.util.Progressable progress, org.apache.hadoop.fs.FileSystem.Statistics statistics): Append to an existing HDFS file.
HdfsDataOutputStream append(String src, int buffersize, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, org.apache.hadoop.util.Progressable progress, org.apache.hadoop.fs.FileSystem.Statistics statistics, InetSocketAddress[] favoredNodes): Append to an existing HDFS file.
BatchedDirectoryListing batchedListPaths(String[] srcs, byte[] startAfter, boolean needLocation): Get a batched listing for the indicated directories.
void cancelDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token): Deprecated. Use Token.cancel instead.
void checkAccess(String src, org.apache.hadoop.fs.permission.FsAction mode)
void clearDataEncryptionKey()
void close(): Close the file system, abandoning all of the leases and files being created, and close connections to the namenode.
void closeAllFilesBeingWritten(boolean abort): Close/abort all files being written.
void closeOutputStreams(boolean abort): Close all open streams, abandoning all of the leases and files being created.
void concat(String trg, String[] srcs): Move blocks from src to trg and delete src. See ClientProtocol.concat(java.lang.String, java.lang.String[]).
protected IOStreamPair connectToDN(DatanodeInfo dn, int timeout, org.apache.hadoop.security.token.Token<BlockTokenIdentifier> blockToken)
OutputStream create(String src, boolean overwrite): Call create(String, boolean, short, long, Progressable) with default replication and blockSize and null progress.
OutputStream create(String src, boolean overwrite, org.apache.hadoop.util.Progressable progress)
OutputStream create(String src, boolean overwrite, short replication, long blockSize): Call create(String, boolean, short, long, Progressable) with null progress.
OutputStream create(String src, boolean overwrite, short replication, long blockSize, org.apache.hadoop.util.Progressable progress): Call create(String, boolean, short, long, Progressable, int) with default bufferSize.
OutputStream create(String src, boolean overwrite, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, int buffersize): Call create(String, FsPermission, EnumSet, short, long, Progressable, int, ChecksumOpt) with default permission FsPermission.getFileDefault().
DFSOutputStream create(String src, org.apache.hadoop.fs.permission.FsPermission permission, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, int buffersize, org.apache.hadoop.fs.Options.ChecksumOpt checksumOpt): Call create(String, FsPermission, EnumSet, boolean, short, long, Progressable, int, ChecksumOpt) with createParent set to true.
DFSOutputStream create(String src, org.apache.hadoop.fs.permission.FsPermission permission, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, int buffersize, org.apache.hadoop.fs.Options.ChecksumOpt checksumOpt): Create a new dfs file with the specified block replication with write-progress reporting and return an output stream for writing into the file.
DFSOutputStream create(String src, org.apache.hadoop.fs.permission.FsPermission permission, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, int buffersize, org.apache.hadoop.fs.Options.ChecksumOpt checksumOpt, InetSocketAddress[] favoredNodes): Same as the preceding create, with the addition of favoredNodes, a hint to where the namenode should place the file blocks.
DFSOutputStream create(String src, org.apache.hadoop.fs.permission.FsPermission permission, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, int buffersize, org.apache.hadoop.fs.Options.ChecksumOpt checksumOpt, InetSocketAddress[] favoredNodes, String ecPolicyName): Same, with ecPolicyName specifying an erasure coding policy to use instead of inheriting any policy from the new file's parent directory.
DFSOutputStream create(String src, org.apache.hadoop.fs.permission.FsPermission permission, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, int buffersize, org.apache.hadoop.fs.Options.ChecksumOpt checksumOpt, InetSocketAddress[] favoredNodes, String ecPolicyName, String storagePolicy): Same, with storagePolicy specifying a storage policy to use instead of inheriting any policy from the new file's parent directory.
void createEncryptionZone(String src, String keyName)
String createSnapshot(String snapshotRoot, String snapshotName): Create one snapshot.
void createSymlink(String target, String link, boolean createParent): Creates a symbolic link.
HdfsDataInputStream createWrappedInputStream(DFSInputStream dfsis): Wraps the stream in a CryptoInputStream if the underlying file is encrypted.
HdfsDataOutputStream createWrappedOutputStream(DFSOutputStream dfsos, org.apache.hadoop.fs.FileSystem.Statistics statistics): Wraps the stream in a CryptoOutputStream if the underlying file is encrypted.
HdfsDataOutputStream createWrappedOutputStream(DFSOutputStream dfsos, org.apache.hadoop.fs.FileSystem.Statistics statistics, long startPos): Wraps the stream in a CryptoOutputStream if the underlying file is encrypted.
DatanodeInfo[] datanodeReport(HdfsConstants.DatanodeReportType type)
boolean delete(String src): Deprecated.
boolean delete(String src, boolean recursive): Delete file or directory; deletes the contents of the directory if non-empty and recursive is set to true.
void deleteSnapshot(String snapshotRoot, String snapshotName): Delete a snapshot of a snapshottable directory.
void disableErasureCodingPolicy(String ecPolicyName)
void disallowSnapshot(String snapshotRoot): Disallow snapshot on a directory.
void enableErasureCodingPolicy(String ecPolicyName)
boolean exists(String src): Implemented using getFileInfo(src).
void finalizeUpgrade()
org.apache.hadoop.fs.permission.AclStatus getAclStatus(String src)
protected LocatedBlocks getBlockLocations(String src, long length)
org.apache.hadoop.fs.BlockLocation[] getBlockLocations(String src, long start, long length): Get block location info about a file; returns a list of hostnames that store data for a specific file region.
long getBlockSize(String f)
long getBytesInFutureBlocks(): Returns number of bytes that reside in blocks with future generation stamps.
String getCanonicalServiceName(): Get a canonical token service name for this client's tokens.
ClientContext getClientContext()
String getClientName()
DfsClientConf getConf()
long getCorruptBlocksCount(): Returns count of blocks with at least one replica marked corrupt.
DatanodeStorageReport[] getDatanodeStorageReport(HdfsConstants.DatanodeReportType type)
DeadNodeDetector getDeadNodeDetector(): Obtain the DeadNodeDetector of the current client.
ConcurrentHashMap<DatanodeInfo,DatanodeInfo> getDeadNodes(DFSInputStream dfsInputStream): If deadNodeDetectionEnabled is true, return the dead nodes detected by all the DFSInputStreams in the same client; otherwise return the dead nodes detected by the given DFSInputStream.
CachingStrategy getDefaultReadCachingStrategy()
CachingStrategy getDefaultWriteCachingStrategy()
org.apache.hadoop.security.token.Token<?> getDelegationToken(String renewer)
org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> getDelegationToken(org.apache.hadoop.io.Text renewer)
org.apache.hadoop.fs.FsStatus getDiskStatus()
ECTopologyVerifierResult getECTopologyResultForPolicies(String... policyNames)
org.apache.hadoop.fs.Path getEnclosingRoot(String src)
DataEncryptionKey getEncryptionKey()
Map<String,String> getErasureCodingCodecs()
ErasureCodingPolicyInfo[] getErasureCodingPolicies()
ErasureCodingPolicy getErasureCodingPolicy(String src): Get the erasure coding policy information for the specified path.
EncryptionZone getEZForPath(String src)
org.apache.hadoop.fs.MD5MD5CRC32FileChecksum getFileChecksum(String src, long length): Get the checksum of the whole file or a range of the file.
org.apache.hadoop.fs.FileChecksum getFileChecksumWithCombineMode(String src, long length): Get the checksum of the whole file or a range of the file.
HdfsFileStatus getFileInfo(String src): Get the file info for a specific file or directory.
HdfsFileStatus getFileLinkInfo(String src): Get the file info for a specific file or directory; if src refers to a symlink, the FileStatus of the link is returned.
org.apache.hadoop.ha.HAServiceProtocol.HAServiceState getHAServiceState(): A non-blocking call to get the HA service state of the NameNode.
DFSInotifyEventInputStream getInotifyEventStream()
DFSInotifyEventInputStream getInotifyEventStream(long lastReadTxid)
org.apache.hadoop.crypto.key.KeyProvider getKeyProvider()
URI getKeyProviderUri()
LeaseRenewer getLeaseRenewer(): Return the lease renewer instance.
String getLinkTarget(String path): Resolve the *first* symlink, if any, in the path.
LocatedBlocksRefresher getLocatedBlockRefresher(): Obtain the LocatedBlocksRefresher of the current client.
LocatedBlocks getLocatedBlocks(String src, long start): Get locations of the blocks of the specified file src from offset start within the prefetch size, which is related to the parameter dfs.client.read.prefetch.size.
LocatedBlocks getLocatedBlocks(String src, long start, long length): A wrapper around callGetBlockLocations, non-static so that it can be stubbed out for tests.
HdfsLocatedFileStatus getLocatedFileInfo(String src, boolean needBlockToken): Get the file info for a specific file or directory.
long getLowRedundancyBlocksCount(): Returns aggregated count of blocks with less redundancy.
long getMissingBlocksCount(): Returns count of blocks with no good replicas left.
long getMissingReplOneBlocksCount(): Returns count of blocks with replication factor 1 that have lost the only replica.
ClientProtocol getNamenode(): Get the namenode associated with this DFSClient object.
int getNumOfFilesBeingWritten()
long getPendingDeletionBlocksCount(): Returns count of blocks pending deletion.
long getRefreshReadBlkLocationsInterval()
SaslDataTransferClient getSaslDataTransferClient(): Returns the SaslDataTransferClient configured for this DFSClient.
org.apache.hadoop.fs.FsServerDefaults getServerDefaults(): Get server default values for a number of configuration params.
SnapshotDiffReport getSnapshotDiffReport(String snapshotDir, String fromSnapshot, String toSnapshot): Get the difference between two snapshots, or between a snapshot and the current tree of a directory.
SnapshotDiffReportListing getSnapshotDiffReportListing(String snapshotDir, String fromSnapshot, String toSnapshot, byte[] startPath, int index): Get the difference between two snapshots of a directory iteratively.
SnapshotStatus[] getSnapshotListing(String snapshotRoot): Get listing of all the snapshots for a snapshottable directory.
SnapshottableDirectoryStatus[] getSnapshottableDirListing(): Get all the current snapshottable directories.
static long getStateAtIndex(long[] states, int index)
BlockStoragePolicy[] getStoragePolicies()
BlockStoragePolicy getStoragePolicy(String path)
byte[] getXAttr(String src, String name)
Map<String,byte[]> getXAttrs(String src)
Map<String,byte[]> getXAttrs(String src, List<String> names)
protected org.apache.hadoop.util.DataChecksum.Type inferChecksumTypeByReading(LocatedBlock lb, DatanodeInfo dn): Infer the checksum type for a replica by sending an OP_READ_BLOCK for the first byte of that replica.
boolean isClientRunning()
boolean isDeadNode(DFSInputStream dfsInputStream, DatanodeInfo datanodeInfo): If deadNodeDetectionEnabled is true, judge by whether this datanode is included in the DeadNodeDetector; otherwise judge by the given DFSInputStream.
boolean isFileClosed(String src): Close status of a file.
boolean isFilesBeingWrittenEmpty(): Is the file-being-written map empty?
org.apache.hadoop.fs.RemoteIterator<CacheDirectiveEntry> listCacheDirectives(CacheDirectiveInfo filter)
org.apache.hadoop.fs.RemoteIterator<CachePoolEntry> listCachePools()
CorruptFileBlocks listCorruptFileBlocks(String path, String cookie)
org.apache.hadoop.fs.RemoteIterator<EncryptionZone> listEncryptionZones()
org.apache.hadoop.fs.RemoteIterator<OpenFileEntry> listOpenFiles(): Deprecated.
org.apache.hadoop.fs.RemoteIterator<OpenFileEntry> listOpenFiles(String path): Get a remote iterator to the open files list by path, managed by NameNode.
org.apache.hadoop.fs.RemoteIterator<OpenFileEntry> listOpenFiles(EnumSet<OpenFilesIterator.OpenFilesType> openFilesTypes): Get a remote iterator to the open files list by type, managed by NameNode.
org.apache.hadoop.fs.RemoteIterator<OpenFileEntry> listOpenFiles(EnumSet<OpenFilesIterator.OpenFilesType> openFilesTypes, String path): Get a remote iterator to the open files list by type and path, managed by NameNode.
DirectoryListing listPaths(String src, byte[] startAfter): Get a partial listing of the indicated directory; no block locations need to be fetched.
DirectoryListing listPaths(String src, byte[] startAfter, boolean needLocation): Get a partial listing of the indicated directory; it is recommended to use HdfsFileStatus.EMPTY_NAME as startAfter to fetch a listing starting from the first entry in the directory.
org.apache.hadoop.fs.RemoteIterator<ZoneReencryptionStatus> listReencryptionStatus()
List<String> listXAttrs(String src)
void metaSave(String pathname): Dumps DFS data structures into specified file.
boolean mkdirs(String src): Deprecated.
boolean mkdirs(String src, org.apache.hadoop.fs.permission.FsPermission permission, boolean createParent): Create a directory (or hierarchy of directories) with the given name and permission.
void modifyAclEntries(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec)
void modifyCacheDirective(CacheDirectiveInfo info, EnumSet<CacheFlag> flags)
void modifyCachePool(CachePoolInfo info)
void msync(): A blocking call to wait for the Observer NameNode state ID to reach the current client state ID.
Peer newConnectedPeer(InetSocketAddress addr, org.apache.hadoop.security.token.Token<BlockTokenIdentifier> blockToken, DatanodeID datanodeId)
DataEncryptionKey newDataEncryptionKey(): Creates a new DataEncryptionKey.
DFSInputStream open(String src): Create an input stream that obtains a nodelist from the namenode, and then reads from all the right places.
DFSInputStream open(String src, int buffersize, boolean verifyChecksum): Create an input stream that obtains a nodelist from the namenode, and then reads from all the right places.
DFSInputStream open(String src, int buffersize, boolean verifyChecksum, org.apache.hadoop.fs.FileSystem.Statistics stats): Deprecated. Use open(String, int, boolean) instead.
DFSInputStream open(HdfsPathHandle fd, int buffersize, boolean verifyChecksum): Create an input stream from the HdfsPathHandle if the constraints encoded from DistributedFileSystem.createPathHandle(FileStatus, Options.HandleOpt...) are satisfied.
DFSOutputStream primitiveCreate(String src, org.apache.hadoop.fs.permission.FsPermission absPermission, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, int buffersize, org.apache.hadoop.fs.Options.ChecksumOpt checksumOpt): Same as create(String, FsPermission, EnumSet, short, long, Progressable, int, ChecksumOpt) except that the permission is absolute (i.e. already masked with umask).
boolean primitiveMkdir(String src, org.apache.hadoop.fs.permission.FsPermission absPermission): Same as mkdirs(String, FsPermission, boolean) except that the permissions have already been masked against umask.
boolean primitiveMkdir(String src, org.apache.hadoop.fs.permission.FsPermission absPermission, boolean createParent): Same as mkdirs(String, FsPermission, boolean) except that the permissions have already been masked against umask.
void putFileBeingWritten(String key, DFSOutputStream out): Put a file.
void reencryptEncryptionZone(String zone, HdfsConstants.ReencryptAction action)
void refreshNodes(): Refresh the hosts and exclude files.
void removeAcl(String src)
void removeAclEntries(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec)
void removeCacheDirective(long id)
void removeCachePool(String poolName)
void removeDefaultAcl(String src)
void removeErasureCodingPolicy(String ecPolicyName)
void removeFileBeingWritten(String key): Remove a file.
void removeLocatedBlocksRefresh(DFSInputStream dfsInputStream): Removes the DFSInputStream from the LocatedBlocksRefresher, so that the underlying LocatedBlocks is no longer periodically refreshed.
void removeNodeFromDeadNodeDetector(DFSInputStream dfsInputStream, DatanodeInfo datanodeInfo): Remove the given datanode from the DeadNodeDetector.
void removeNodeFromDeadNodeDetector(DFSInputStream dfsInputStream, LocatedBlocks locatedBlocks): Remove the datanodes that the given blocks are placed on from the DeadNodeDetector.
void removeXAttr(String src, String name)
boolean rename(String src, String dst): Deprecated. Use rename(String, String, Options.Rename...) instead.
void rename(String src, String dst, org.apache.hadoop.fs.Options.Rename... options): Rename file or directory.
void renameSnapshot(String snapshotDir, String snapshotOldName, String snapshotNewName): Rename a snapshot.
long renewDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token): Deprecated. Use Token.renew instead.
boolean renewLease(): Renew leases.
void reportBadBlocks(LocatedBlock[] blocks): Report corrupt blocks that were discovered by the client.
void satisfyStoragePolicy(String src): Satisfy storage policy for an existing file/directory.
void setAcl(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec)
void setBalancerBandwidth(long bandwidth): Requests the namenode to tell all datanodes to use a new, non-persistent bandwidth value for dfs.datanode.balance.bandwidthPerSec.
static void setDisabledStopDeadNodeDetectorThreadForTest(boolean disabledStopDeadNodeDetectorThreadForTest)
void setErasureCodingPolicy(String src, String ecPolicyName)
void setKeyProvider(org.apache.hadoop.crypto.key.KeyProvider provider)
void setOwner(String src, String username, String groupname): Set file or directory owner.
void setPermission(String src, org.apache.hadoop.fs.permission.FsPermission permission): Set permissions to a file or directory.
boolean setReplication(String src, short replication): Set replication for an existing file.
boolean setSafeMode(HdfsConstants.SafeModeAction action): Enter, leave or get safe mode.
boolean setSafeMode(HdfsConstants.SafeModeAction action, boolean isChecked): Enter, leave or get safe mode.
void setStoragePolicy(String src, String policyName): Set storage policy for an existing file/directory.
void setTimes(String src, long mtime, long atime): Set the modification and access time of a file.
void setXAttr(String src, String name, byte[] value, EnumSet<org.apache.hadoop.fs.XAttrSetFlag> flag)
DatanodeInfo[] slowDatanodeReport()
String toString()
boolean truncate(String src, long newLength): Truncate a file to an indicated size. See ClientProtocol.truncate(java.lang.String, long, java.lang.String).
void unsetErasureCodingPolicy(String src)
void unsetStoragePolicy(String src): Unset storage policy set for a given file/directory.
boolean upgradeStatus()
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
Methods inherited from interface org.apache.hadoop.security.token.DelegationTokenIssuer
addDelegationTokens, getAdditionalTokenIssuers
-
Field Details
-
LOG
public static final org.slf4j.Logger LOG
-
-
Constructor Details
-
DFSClient
@Deprecated public DFSClient(org.apache.hadoop.conf.Configuration conf) throws IOException Deprecated at 0.21. Same as this(NameNode.getNNAddress(conf), conf);- Throws:
IOException
-
DFSClient
public DFSClient(InetSocketAddress address, org.apache.hadoop.conf.Configuration conf) throws IOException - Throws:
IOException
-
DFSClient
public DFSClient(URI nameNodeUri, org.apache.hadoop.conf.Configuration conf) throws IOException Same as this(nameNodeUri, conf, null);- Throws:
IOException
-
DFSClient
public DFSClient(URI nameNodeUri, org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem.Statistics stats) throws IOException Same as this(nameNodeUri, null, conf, stats); -
DFSClient
@VisibleForTesting public DFSClient(URI nameNodeUri, ClientProtocol rpcNamenode, org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem.Statistics stats) throws IOException Create a new DFSClient connected to the given nameNodeUri or rpcNamenode. If HA is enabled and a positive value is set for HdfsClientConfigKeys.DFS_CLIENT_TEST_DROP_NAMENODE_RESPONSE_NUM_KEY in the configuration, the DFSClient will use LossyRetryInvocationHandler as its RetryInvocationHandler. Otherwise one of nameNodeUri or rpcNamenode must be null.- Throws:
IOException
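For the method sketches below, assume a client constructed as in this hedged example (test or diagnostic use only, since the class is @Private; the URI is a placeholder and a null Statistics argument is passed):

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.DFSClient;

    Configuration conf = new Configuration();
    // Equivalent to this(nameNodeUri, null, conf, stats) with stats == null.
    try (DFSClient client = new DFSClient(URI.create("hdfs://namenode:8020"), conf, null)) {
      System.out.println(client.isClientRunning());
    }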
-
-
Method Details
-
setDisabledStopDeadNodeDetectorThreadForTest
@VisibleForTesting public static void setDisabledStopDeadNodeDetectorThreadForTest(boolean disabledStopDeadNodeDetectorThreadForTest) -
getConf
-
getClientName
-
getLeaseRenewer
Return the lease renewer instance. The renewer thread won't start until the first output stream is created. The same instance will be returned until all output streams are closed. -
putFileBeingWritten
Put a file. Only called from LeaseRenewer, where proper locking is enforced to consistently update its local dfsclients array and client's filesBeingWritten map. -
removeFileBeingWritten
Remove a file. Only called from LeaseRenewer. -
isFilesBeingWrittenEmpty
public boolean isFilesBeingWrittenEmpty() Is the file-being-written map empty? -
isClientRunning
public boolean isClientRunning()- Returns:
- true if the client is running
-
getNumOfFilesBeingWritten
@VisibleForTesting public int getNumOfFilesBeingWritten() -
renewLease
Renew leases.- Returns:
- true if lease was renewed. May return false if this client has been closed or has no files open.
- Throws:
IOException
-
closeAllFilesBeingWritten
public void closeAllFilesBeingWritten(boolean abort) Close/abort all files being written. -
close
Close the file system, abandoning all of the leases and files being created, and close connections to the namenode.- Specified by:
close in interface AutoCloseable- Specified by:
close in interface Closeable- Throws:
IOException
-
closeOutputStreams
public void closeOutputStreams(boolean abort) Close all open streams, abandoning all of the leases and files being created.- Parameters:
abort- if true, the streams are aborted rather than closed gracefully
-
getBlockSize
- Throws:
IOException
-
getServerDefaults
Get server default values for a number of configuration params.- Throws:
IOException
-
getCanonicalServiceName
Get a canonical token service name for this client's tokens. Null should be returned if the client is not using tokens.- Specified by:
getCanonicalServiceName in interface org.apache.hadoop.security.token.DelegationTokenIssuer- Returns:
- the token service for the client
-
getDelegationToken
public org.apache.hadoop.security.token.Token<?> getDelegationToken(String renewer) throws IOException - Specified by:
getDelegationToken in interface org.apache.hadoop.security.token.DelegationTokenIssuer- Throws:
IOException
-
getDelegationToken
public org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> getDelegationToken(org.apache.hadoop.io.Text renewer) throws IOException - Throws:
IOException
-
renewDelegationToken
@Deprecated public long renewDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token) throws IOException Deprecated. Use Token.renew instead. Renew a delegation token.- Parameters:
token- the token to renew- Returns:
- the new expiration time
- Throws:
IOException
-
cancelDelegationToken
@Deprecated public void cancelDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token) throws IOException Deprecated. Use Token.cancel instead. Cancel a delegation token.- Parameters:
token- the token to cancel- Throws:
IOException
-
reportBadBlocks
Report corrupt blocks that were discovered by the client.- Throws:
IOException
-
getRefreshReadBlkLocationsInterval
public long getRefreshReadBlkLocationsInterval() -
getLocatedBlocks
Get locations of the blocks of the specified file `src` from offset `start` within the prefetch size, which is related to the parameter `dfs.client.read.prefetch.size`. DataNode locations for each block are sorted by proximity to the client. Please note that the prefetch size is generally not equal to the file length.- Parameters:
src- the file path.
start- starting offset.
- Returns:
- LocatedBlocks
- Throws:
IOException
-
getLocatedBlocks
@VisibleForTesting public LocatedBlocks getLocatedBlocks(String src, long start, long length) throws IOException This is just a wrapper around callGetBlockLocations, but non-static so that we can stub it out for tests.- Throws:
IOException
-
getBlockLocations
public org.apache.hadoop.fs.BlockLocation[] getBlockLocations(String src, long start, long length) throws IOException Get block location info about a file. getBlockLocations() returns a list of hostnames that store data for a specific file region. It returns a set of hostnames for every block within the indicated region. This function is very useful when writing code that considers data-placement when performing operations. For example, the MapReduce system tries to schedule tasks on the same machines as the data-block the task processes. Please refer to FileSystem.getFileBlockLocations(FileStatus, long, long) for more details.- Throws:
IOException
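An illustrative sketch (path hypothetical, client as in the constructor example above) that prints which hosts serve the first 128 MB of a file:

    org.apache.hadoop.fs.BlockLocation[] locs =
        client.getBlockLocations("/data/input.bin", 0L, 128L * 1024 * 1024);
    for (org.apache.hadoop.fs.BlockLocation loc : locs) {
      // Each BlockLocation covers one block region and lists the hosts storing it.
      System.out.println(loc.getOffset() + "+" + loc.getLength()
          + " -> " + String.join(",", loc.getHosts()));
    }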
-
createWrappedInputStream
Wraps the stream in a CryptoInputStream if the underlying file is encrypted.- Throws:
IOException
-
createWrappedOutputStream
public HdfsDataOutputStream createWrappedOutputStream(DFSOutputStream dfsos, org.apache.hadoop.fs.FileSystem.Statistics statistics) throws IOException Wraps the stream in a CryptoOutputStream if the underlying file is encrypted.- Throws:
IOException
-
createWrappedOutputStream
public HdfsDataOutputStream createWrappedOutputStream(DFSOutputStream dfsos, org.apache.hadoop.fs.FileSystem.Statistics statistics, long startPos) throws IOException Wraps the stream in a CryptoOutputStream if the underlying file is encrypted.- Throws:
IOException
-
open
- Throws:
IOException
-
open
@Deprecated public DFSInputStream open(String src, int buffersize, boolean verifyChecksum, org.apache.hadoop.fs.FileSystem.Statistics stats) throws IOException Deprecated. Use open(String, int, boolean) instead. Create an input stream that obtains a nodelist from the namenode, and then reads from all the right places. Creates inner subclass of InputStream that does the right out-of-band work.- Throws:
IOException
-
open
Create an input stream that obtains a nodelist from the namenode, and then reads from all the right places. Creates inner subclass of InputStream that does the right out-of-band work.- Throws:
IOException
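A read sketch under the same assumptions (hypothetical path; DFSInputStream is an ordinary java.io.InputStream and should be closed):

    byte[] buf = new byte[8192];
    try (DFSInputStream in = client.open("/data/input.bin")) {
      int n;
      while ((n = in.read(buf, 0, buf.length)) > 0) {
        // process n bytes of file data here
      }
    }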
-
open
public DFSInputStream open(HdfsPathHandle fd, int buffersize, boolean verifyChecksum) throws IOException Create an input stream from the HdfsPathHandle if the constraints encoded from DistributedFileSystem.createPathHandle(FileStatus, Options.HandleOpt...) are satisfied. Note that HDFS does not ensure that these constraints remain invariant for the life of the stream. It only checks that they still held when the stream was opened.- Parameters:
fd- Handle to an entity in HDFS, with constraints
buffersize- ignored
verifyChecksum- Verify checksums before returning data to client
- Returns:
- Data from the referent of the
HdfsPathHandle. - Throws:
IOException- On I/O error
-
getNamenode
Get the namenode associated with this DFSClient object- Returns:
- the namenode associated with this DFSClient object
-
create
Call create(String, boolean, short, long, Progressable) with default replication and blockSize and null progress.- Throws:
IOException
-
create
public OutputStream create(String src, boolean overwrite, org.apache.hadoop.util.Progressable progress) throws IOException - Throws:
IOException
-
create
public OutputStream create(String src, boolean overwrite, short replication, long blockSize) throws IOException Call create(String, boolean, short, long, Progressable) with null progress.- Throws:
IOException
-
create
public OutputStream create(String src, boolean overwrite, short replication, long blockSize, org.apache.hadoop.util.Progressable progress) throws IOException Call create(String, boolean, short, long, Progressable, int) with default bufferSize.- Throws:
IOException
-
create
public OutputStream create(String src, boolean overwrite, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, int buffersize) throws IOException Call create(String, FsPermission, EnumSet, short, long, Progressable, int, ChecksumOpt) with default permission FsPermission.getFileDefault().- Parameters:
src- File name
overwrite- overwrite an existing file if true
replication- replication factor for the file
blockSize- maximum block size
progress- interface for reporting client progress
buffersize- underlying buffersize
- Returns:
- output stream
- Throws:
IOException
-
create
public DFSOutputStream create(String src, org.apache.hadoop.fs.permission.FsPermission permission, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, int buffersize, org.apache.hadoop.fs.Options.ChecksumOpt checksumOpt) throws IOException Call create(String, FsPermission, EnumSet, boolean, short, long, Progressable, int, ChecksumOpt) with createParent set to true.- Throws:
IOException
-
create
public DFSOutputStream create(String src, org.apache.hadoop.fs.permission.FsPermission permission, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, int buffersize, org.apache.hadoop.fs.Options.ChecksumOpt checksumOpt) throws IOException Create a new dfs file with the specified block replication with write-progress reporting and return an output stream for writing into the file.- Parameters:
src- File name
permission- The permission of the file being created. If null, use default permission FsPermission.getFileDefault()
flag- indicates create a new file or create/overwrite an existing file or append to an existing file
createParent- create missing parent directory if true
replication- block replication
blockSize- maximum block size
progress- interface for reporting client progress
buffersize- underlying buffer size
checksumOpt- checksum options
- Returns:
- output stream
- Throws:
IOException
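A create sketch under stated assumptions: the path is hypothetical, a 128 MB block size and replication 3 are chosen arbitrarily, and null is passed for progress and checksumOpt on the assumption that configured defaults are then used:

    import java.nio.charset.StandardCharsets;
    import java.util.EnumSet;
    import org.apache.hadoop.fs.CreateFlag;
    import org.apache.hadoop.fs.permission.FsPermission;

    EnumSet<CreateFlag> flags = EnumSet.of(CreateFlag.CREATE, CreateFlag.OVERWRITE);
    try (DFSOutputStream out = client.create("/tmp/example/out.bin",
        FsPermission.getFileDefault(), flags, true /* createParent */,
        (short) 3, 128L * 1024 * 1024, null /* progress */, 4096,
        null /* checksumOpt: assumed to fall back to configured defaults */)) {
      out.write("hello".getBytes(StandardCharsets.UTF_8));
    }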
-
create
public DFSOutputStream create(String src, org.apache.hadoop.fs.permission.FsPermission permission, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, int buffersize, org.apache.hadoop.fs.Options.ChecksumOpt checksumOpt, InetSocketAddress[] favoredNodes) throws IOException Same as create(String, FsPermission, EnumSet, boolean, short, long, Progressable, int, ChecksumOpt) with the addition of favoredNodes, which is a hint to where the namenode should place the file blocks. The favored nodes hint is not persisted in HDFS, so it may be honored at creation time only; HDFS may later move the blocks away from the favored nodes during balancing or replication. A value of null means no favored nodes for this create.- Throws:
IOException
-
create
public DFSOutputStream create(String src, org.apache.hadoop.fs.permission.FsPermission permission, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, int buffersize, org.apache.hadoop.fs.Options.ChecksumOpt checksumOpt, InetSocketAddress[] favoredNodes, String ecPolicyName) throws IOException Same as create(String, FsPermission, EnumSet, boolean, short, long, Progressable, int, ChecksumOpt, InetSocketAddress[]) with the addition of ecPolicyName, which specifies an erasure coding policy to use instead of inheriting any policy from this new file's parent directory. This policy will be persisted in HDFS. A value of null means inheriting the parent directory's policy.- Throws:
IOException
-
create
public DFSOutputStream create(String src, org.apache.hadoop.fs.permission.FsPermission permission, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, int buffersize, org.apache.hadoop.fs.Options.ChecksumOpt checksumOpt, InetSocketAddress[] favoredNodes, String ecPolicyName, String storagePolicy) throws IOException Same as create(String, FsPermission, EnumSet, boolean, short, long, Progressable, int, ChecksumOpt, InetSocketAddress[], String) with a storagePolicy that specifies a storage policy to use instead of inheriting any policy from this new file's parent directory. This policy will be persisted in HDFS. A value of null means inheriting the parent directory's policy.- Throws:
IOException
-
primitiveCreate
public DFSOutputStream primitiveCreate(String src, org.apache.hadoop.fs.permission.FsPermission absPermission, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, int buffersize, org.apache.hadoop.fs.Options.ChecksumOpt checksumOpt) throws IOException Same as create(String, FsPermission, EnumSet, short, long, Progressable, int, ChecksumOpt) except that the permission is absolute (i.e. it has already been masked with umask).- Throws:
IOException
-
createSymlink
Creates a symbolic link. -
getLinkTarget
Resolve the *first* symlink, if any, in the path.- Throws:
IOException
-
append
public HdfsDataOutputStream append(String src, int buffersize, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, org.apache.hadoop.util.Progressable progress, org.apache.hadoop.fs.FileSystem.Statistics statistics) throws IOException Append to an existing HDFS file.- Parameters:
src- file name
buffersize- buffer size
flag- indicates whether to append data to a new block instead of the last block
progress- for reporting write-progress; null is acceptable.
statistics- file system statistics; null is acceptable.
- Returns:
- an output stream for writing into the file
- Throws:
IOException
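An append sketch (the file is assumed to exist; null progress and statistics are acceptable per the parameter notes above):

    import java.util.EnumSet;
    import org.apache.hadoop.fs.CreateFlag;

    try (HdfsDataOutputStream out = client.append("/tmp/example/out.bin",
        4096, EnumSet.of(CreateFlag.APPEND), null, null)) {
      out.write(" more".getBytes(java.nio.charset.StandardCharsets.UTF_8));
    }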
-
append
public HdfsDataOutputStream append(String src, int buffersize, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, org.apache.hadoop.util.Progressable progress, org.apache.hadoop.fs.FileSystem.Statistics statistics, InetSocketAddress[] favoredNodes) throws IOException Append to an existing HDFS file.- Parameters:
src- file name
buffersize- buffer size
flag- indicates whether to append data to a new block instead of the last block
progress- for reporting write-progress; null is acceptable.
statistics- file system statistics; null is acceptable.
favoredNodes- FavoredNodes for new blocks
- Returns:
- an output stream for writing into the file
- Throws:
IOException
-
setReplication
Set replication for an existing file.- Parameters:
src- file name
replication- replication to set the file to
- Throws:
IOException
-
setStoragePolicy
Set storage policy for an existing file/directory- Parameters:
src- file/directory name
policyName- name of the storage policy
- Throws:
IOException
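For example, with one of the built-in policy names (HOT, WARM, COLD, ALL_SSD, ONE_SSD, LAZY_PERSIST) and a hypothetical path:

    client.setStoragePolicy("/data/archive", "COLD");
    System.out.println(client.getStoragePolicy("/data/archive"));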
-
unsetStoragePolicy
Unset storage policy set for a given file/directory.- Parameters:
src- file/directory name- Throws:
IOException
-
getStoragePolicy
- Parameters:
path- file/directory name- Returns:
- Get the storage policy for specified path
- Throws:
IOException
-
getStoragePolicies
- Returns:
- All the existing storage policies
- Throws:
IOException
-
rename
Deprecated. Use rename(String, String, Options.Rename...) instead. Rename file or directory.- Throws:
IOException
-
concat
Move blocks from src to trg and delete src. See ClientProtocol.concat(java.lang.String, java.lang.String[]).- Throws:
IOException
-
rename
public void rename(String src, String dst, org.apache.hadoop.fs.Options.Rename... options) throws IOException Rename file or directory.- Throws:
IOException
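For example (paths hypothetical; Options.Rename.OVERWRITE replaces an existing destination):

    import org.apache.hadoop.fs.Options;

    client.rename("/tmp/example/out.bin", "/tmp/example/renamed.bin",
        Options.Rename.OVERWRITE);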
-
truncate
Truncate a file to an indicated size. See ClientProtocol.truncate(java.lang.String, long, java.lang.String).- Throws:
IOException
-
delete
Deprecated. Delete file or directory. See ClientProtocol.delete(String, boolean).- Throws:
IOException
-
delete
Delete file or directory. Deletes the contents of the directory if it is non-empty and recursive is set to true.- Throws:
IOException
-
exists
Implemented using getFileInfo(src)- Throws:
IOException
-
listPaths
Get a partial listing of the indicated directory. No block locations need to be fetched.- Throws:
IOException
-
listPaths
public DirectoryListing listPaths(String src, byte[] startAfter, boolean needLocation) throws IOException Get a partial listing of the indicated directory. It is recommended to use HdfsFileStatus.EMPTY_NAME as startAfter if the application wants to fetch a listing starting from the first entry in the directory.- Throws:
IOException
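A pagination sketch using HdfsFileStatus.EMPTY_NAME as recommended above (directory path hypothetical):

    byte[] startAfter = HdfsFileStatus.EMPTY_NAME;
    DirectoryListing page;
    do {
      page = client.listPaths("/data", startAfter, false /* no block locations */);
      if (page == null) {
        break; // directory does not exist
      }
      for (HdfsFileStatus st : page.getPartialListing()) {
        System.out.println(st.getLocalName());
      }
      startAfter = page.getLastName(); // resume after the last entry returned
    } while (page.hasMore());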
-
batchedListPaths
public BatchedDirectoryListing batchedListPaths(String[] srcs, byte[] startAfter, boolean needLocation) throws IOException Get a batched listing for the indicated directories.- Throws:
IOException
-
getFileInfo
Get the file info for a specific file or directory.- Parameters:
src- The string representation of the path to the file- Returns:
- object containing information regarding the file or null if file not found
- Throws:
IOException
-
getLocatedFileInfo
public HdfsLocatedFileStatus getLocatedFileInfo(String src, boolean needBlockToken) throws IOException Get the file info for a specific file or directory.- Parameters:
src- The string representation of the path to the file
needBlockToken- Include block tokens in LocatedBlocks. When block tokens are included, this call is a superset of getBlockLocations(String, long).
- Returns:
- object containing information regarding the file or null if file not found
- Throws:
IOException
-
isFileClosed
Close status of a file- Returns:
- true if file is already closed
- Throws:
IOException
-
getFileLinkInfo
Get the file info for a specific file or directory. If src refers to a symlink then the FileStatus of the link is returned.- Parameters:
src- path to a file or directory.
- Throws:
IOException
-
clearDataEncryptionKey
@Private public void clearDataEncryptionKey() -
newDataEncryptionKey
Description copied from interface: DataEncryptionKeyFactory. Creates a new DataEncryptionKey.- Specified by:
newDataEncryptionKey in interface DataEncryptionKeyFactory- Returns:
- DataEncryptionKey newly created
- Throws:
IOException- for any error
-
getEncryptionKey
-
getFileChecksumWithCombineMode
public org.apache.hadoop.fs.FileChecksum getFileChecksumWithCombineMode(String src, long length) throws IOException Get the checksum of the whole file or a range of the file. Note that the range always starts from the beginning of the file. The file can be in replicated form, or striped mode. Depending on the dfs.checksum.combine.mode, checksums may or may not be comparable between different block layout forms.- Parameters:
src- The file path
length- the length of the range, i.e., the range is [0, length]
- Returns:
- The checksum
- Throws:
IOException
-
getFileChecksum
public org.apache.hadoop.fs.MD5MD5CRC32FileChecksum getFileChecksum(String src, long length) throws IOException Get the checksum of the whole file or a range of the file. Note that the range always starts from the beginning of the file. The file can be in replicated form, or striped mode. It can be used to checksum and compare two replicated files, or two striped files, but not applicable for two files of different block layout forms.- Parameters:
src- The file path
length- the length of the range, i.e., the range is [0, length]
- Returns:
- The checksum
- Throws:
IOException
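A comparison sketch; passing Long.MAX_VALUE as the length is assumed here to cover the whole file, which is how DistributedFileSystem requests whole-file checksums:

    org.apache.hadoop.fs.MD5MD5CRC32FileChecksum c1 =
        client.getFileChecksum("/data/a.bin", Long.MAX_VALUE);
    org.apache.hadoop.fs.MD5MD5CRC32FileChecksum c2 =
        client.getFileChecksum("/data/b.bin", Long.MAX_VALUE);
    // Equal checksums imply identical content for files of the same block layout.
    System.out.println(c1.equals(c2) ? "match" : "differ");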
-
getBlockLocations
- Throws:
IOException
-
connectToDN
protected IOStreamPair connectToDN(DatanodeInfo dn, int timeout, org.apache.hadoop.security.token.Token<BlockTokenIdentifier> blockToken) throws IOException - Throws:
IOException
-
inferChecksumTypeByReading
protected org.apache.hadoop.util.DataChecksum.Type inferChecksumTypeByReading(LocatedBlock lb, DatanodeInfo dn) throws IOException Infer the checksum type for a replica by sending an OP_READ_BLOCK for the first byte of that replica. This is used for compatibility with older HDFS versions which did not include the checksum type in OpBlockChecksumResponseProto.- Parameters:
lb- the located block
dn- the connected datanode
- Returns:
- the inferred checksum type
- Throws:
IOException- if an error occurs
-
setPermission
public void setPermission(String src, org.apache.hadoop.fs.permission.FsPermission permission) throws IOException Set permissions to a file or directory.- Parameters:
src- path name.
permission- permission to set to
- Throws:
IOException
-
setOwner
Set file or directory owner.- Parameters:
src- path name.
username- user id.
groupname- user group.
- Throws:
IOException
-
getDiskStatus
- Throws:
IOException
-
getStateAtIndex
public static long getStateAtIndex(long[] states, int index) -
getMissingBlocksCount
Returns count of blocks with no good replicas left. Normally should be zero.- Throws:
IOException
-
getMissingReplOneBlocksCount
Returns count of blocks with replication factor 1 and have lost the only replica.- Throws:
IOException
-
getPendingDeletionBlocksCount
Returns count of blocks pending on deletion.- Throws:
IOException
-
getLowRedundancyBlocksCount
Returns aggregated count of blocks with less redundancy.- Throws:
IOException
-
getCorruptBlocksCount
Returns count of blocks with at least one replica marked corrupt.- Throws:
IOException
-
getBytesInFutureBlocks
Returns number of bytes that reside in Blocks with future generation stamps.- Returns:
- Bytes in Blocks with future generation stamps.
- Throws:
IOException
-
listCorruptFileBlocks
- Returns:
- a list in which each entry describes a corrupt file/block
- Throws:
IOException
-
datanodeReport
- Throws:
IOException
-
getDatanodeStorageReport
public DatanodeStorageReport[] getDatanodeStorageReport(HdfsConstants.DatanodeReportType type) throws IOException - Throws:
IOException
-
setSafeMode
Enter, leave or get safe mode.- Throws:
IOException
-
setSafeMode
public boolean setSafeMode(HdfsConstants.SafeModeAction action, boolean isChecked) throws IOException Enter, leave or get safe mode.- Parameters:
action- One of SafeModeAction.GET, SafeModeAction.ENTER and SafeModeAction.LEAVE
isChecked- If true, then check only the active namenode's safemode status; otherwise check the first namenode's status.
- Throws:
IOException
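A status-only sketch; the constant names are from HdfsConstants.SafeModeAction:

    boolean inSafeMode =
        client.setSafeMode(HdfsConstants.SafeModeAction.SAFEMODE_GET, true);
    System.out.println("NameNode in safe mode: " + inSafeMode);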
-
createSnapshot
Create one snapshot.- Parameters:
snapshotRoot- The directory where the snapshot is to be taken
snapshotName- Name of the snapshot
- Returns:
- the snapshot path.
- Throws:
IOException
-
deleteSnapshot
Delete a snapshot of a snapshottable directory.- Parameters:
snapshotRoot- The snapshottable directory that the to-be-deleted snapshot belongs to
snapshotName- The name of the to-be-deleted snapshot
- Throws:
IOException
-
renameSnapshot
public void renameSnapshot(String snapshotDir, String snapshotOldName, String snapshotNewName) throws IOException Rename a snapshot.- Parameters:
snapshotDir- The directory path where the snapshot was taken
snapshotOldName- Old name of the snapshot
snapshotNewName- New name of the snapshot
- Throws:
IOException
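A snapshot lifecycle sketch (directory and snapshot names hypothetical; the directory must first be made snapshottable):

    client.allowSnapshot("/data");
    String snapPath = client.createSnapshot("/data", "s1");
    System.out.println("created " + snapPath); // e.g. /data/.snapshot/s1
    client.renameSnapshot("/data", "s1", "s1-archived");
    client.deleteSnapshot("/data", "s1-archived");
    client.disallowSnapshot("/data");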
-
getSnapshottableDirListing
Get all the current snapshottable directories.- Returns:
- All the current snapshottable directories
- Throws:
IOException
-
getSnapshotListing
Get listing of all the snapshots for a snapshottable directory.- Returns:
- Information about all the snapshots for a snapshottable directory
- Throws:
IOException- If an I/O error occurred
-
allowSnapshot
Allow snapshot on a directory.- Throws:
IOException
-
disallowSnapshot
Disallow snapshot on a directory.- Throws:
IOException
-
getSnapshotDiffReport
public SnapshotDiffReport getSnapshotDiffReport(String snapshotDir, String fromSnapshot, String toSnapshot) throws IOException Get the difference between two snapshots, or between a snapshot and the current tree of a directory. -
getSnapshotDiffReportListing
public SnapshotDiffReportListing getSnapshotDiffReportListing(String snapshotDir, String fromSnapshot, String toSnapshot, byte[] startPath, int index) throws IOException Get the difference between two snapshots of a directory iteratively. -
addCacheDirective
- Throws:
IOException
-
modifyCacheDirective
public void modifyCacheDirective(CacheDirectiveInfo info, EnumSet<CacheFlag> flags) throws IOException - Throws:
IOException
-
removeCacheDirective
- Throws:
IOException
-
listCacheDirectives
public org.apache.hadoop.fs.RemoteIterator<CacheDirectiveEntry> listCacheDirectives(CacheDirectiveInfo filter) throws IOException - Throws:
IOException
-
addCachePool
- Throws:
IOException
-
modifyCachePool
- Throws:
IOException
-
removeCachePool
- Throws:
IOException
-
listCachePools
- Throws:
IOException
-
refreshNodes
Refresh the hosts and exclude files. (Rereads them.) See ClientProtocol.refreshNodes() for more details.- Throws:
IOException
-
metaSave
Dumps DFS data structures into specified file.- Throws:
IOException
-
setBalancerBandwidth
Requests the namenode to tell all datanodes to use a new, non-persistent bandwidth value for dfs.datanode.balance.bandwidthPerSec. See ClientProtocol.setBalancerBandwidth(long) for more details.- Throws:
IOException
-
finalizeUpgrade
- Throws:
IOException
-
upgradeStatus
- Throws:
IOException
-
mkdirs
Deprecated.- Throws:
IOException
-
mkdirs
public boolean mkdirs(String src, org.apache.hadoop.fs.permission.FsPermission permission, boolean createParent) throws IOException Create a directory (or hierarchy of directories) with the given name and permission.- Parameters:
src- The path of the directory being created
permission- The permission of the directory being created. If permission == null, use FsPermission.getDirDefault().
createParent- create missing parent directory if true
- Returns:
- True if the operation succeeds.
- Throws:
IOException
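For example (path hypothetical; per the parameter notes, a null permission would fall back to FsPermission.getDirDefault()):

    boolean created = client.mkdirs("/data/raw/2024",
        org.apache.hadoop.fs.permission.FsPermission.getDirDefault(), true);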
-
primitiveMkdir
public boolean primitiveMkdir(String src, org.apache.hadoop.fs.permission.FsPermission absPermission) throws IOException Same as mkdirs(String, FsPermission, boolean) except that the permissions have already been masked against umask.- Throws:
IOException
-
primitiveMkdir
public boolean primitiveMkdir(String src, org.apache.hadoop.fs.permission.FsPermission absPermission, boolean createParent) throws IOException Same as mkdirs(String, FsPermission, boolean) except that the permissions have already been masked against umask.- Throws:
IOException
-
setTimes
Set the modification and access time of a file.- Throws:
IOException
-
toString
-
getDefaultReadCachingStrategy
-
getDefaultWriteCachingStrategy
-
getClientContext
-
modifyAclEntries
public void modifyAclEntries(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException - Throws:
IOException
-
removeAclEntries
public void removeAclEntries(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException - Throws:
IOException
-
removeDefaultAcl
- Throws:
IOException
-
removeAcl
- Throws:
IOException
-
setAcl
public void setAcl(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException - Throws:
IOException
-
getAclStatus
- Throws:
IOException
-
createEncryptionZone
- Throws:
IOException
-
getEZForPath
- Throws:
IOException
-
listEncryptionZones
- Throws:
IOException
-
reencryptEncryptionZone
public void reencryptEncryptionZone(String zone, HdfsConstants.ReencryptAction action) throws IOException - Throws:
IOException
-
listReencryptionStatus
public org.apache.hadoop.fs.RemoteIterator<ZoneReencryptionStatus> listReencryptionStatus() throws IOException
- Throws:
IOException
-
setErasureCodingPolicy
- Throws:
IOException
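A sketch of pinning a policy on a directory; "RS-6-3-1024k" is one of the built-in erasure coding policy names and the path is hypothetical:

    client.enableErasureCodingPolicy("RS-6-3-1024k");
    client.setErasureCodingPolicy("/data/ec", "RS-6-3-1024k");
    System.out.println(client.getErasureCodingPolicy("/data/ec"));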
-
unsetErasureCodingPolicy
- Throws:
IOException
-
getECTopologyResultForPolicies
public ECTopologyVerifierResult getECTopologyResultForPolicies(String... policyNames) throws IOException - Throws:
IOException
-
setXAttr
public void setXAttr(String src, String name, byte[] value, EnumSet<org.apache.hadoop.fs.XAttrSetFlag> flag) throws IOException - Throws:
IOException
-
getXAttr
- Throws:
IOException
-
getXAttrs
- Throws:
IOException
-
getXAttrs
- Throws:
IOException
-
listXAttrs
- Throws:
IOException
-
removeXAttr
- Throws:
IOException
-
checkAccess
public void checkAccess(String src, org.apache.hadoop.fs.permission.FsAction mode) throws IOException - Throws:
IOException
-
getErasureCodingPolicies
- Throws:
IOException
-
getErasureCodingCodecs
- Throws:
IOException
-
addErasureCodingPolicies
public AddErasureCodingPolicyResponse[] addErasureCodingPolicies(ErasureCodingPolicy[] policies) throws IOException - Throws:
IOException
-
removeErasureCodingPolicy
- Throws:
IOException
-
enableErasureCodingPolicy
- Throws:
IOException
-
disableErasureCodingPolicy
- Throws:
IOException
-
getInotifyEventStream
- Throws:
IOException
-
getInotifyEventStream
- Throws:
IOException
-
newConnectedPeer
public Peer newConnectedPeer(InetSocketAddress addr, org.apache.hadoop.security.token.Token<BlockTokenIdentifier> blockToken, DatanodeID datanodeId) throws IOException - Specified by:
newConnectedPeer in interface RemotePeerFactory- Parameters:
addr- The address to connect to.
blockToken- Token used during optional SASL negotiation
datanodeId- ID of destination DataNode
- Returns:
- A new Peer connected to the address.
- Throws:
IOException- If there was an error connecting or creating the remote socket, encrypted stream, etc.
-
getKeyProviderUri
- Specified by:
getKeyProviderUri in interface org.apache.hadoop.crypto.key.KeyProviderTokenIssuer- Throws:
IOException
-
getKeyProvider
- Specified by:
getKeyProvider in interface org.apache.hadoop.crypto.key.KeyProviderTokenIssuer- Throws:
IOException
-
setKeyProvider
@VisibleForTesting public void setKeyProvider(org.apache.hadoop.crypto.key.KeyProvider provider) -
getSaslDataTransferClient
Returns the SaslDataTransferClient configured for this DFSClient.- Returns:
- SaslDataTransferClient configured for this DFSClient
-
getErasureCodingPolicy
Get the erasure coding policy information for the specified path- Parameters:
src- path to get the information for- Returns:
- The policy information if the file or directory on the path is erasure coded; null otherwise. Null will also be returned if the directory or file has a REPLICATION policy.
- Throws:
IOException
-
satisfyStoragePolicy
Satisfy storage policy for an existing file/directory.- Parameters:
src- file/directory name- Throws:
IOException
-
listOpenFiles
@Deprecated public org.apache.hadoop.fs.RemoteIterator<OpenFileEntry> listOpenFiles() throws IOException Deprecated. Get a remote iterator to the open files list managed by NameNode.- Throws:
IOException
-
listOpenFiles
public org.apache.hadoop.fs.RemoteIterator<OpenFileEntry> listOpenFiles(String path) throws IOException Get a remote iterator to the open files list by path, managed by NameNode.- Parameters:
path-- Throws:
IOException
-
listOpenFiles
public org.apache.hadoop.fs.RemoteIterator<OpenFileEntry> listOpenFiles(EnumSet<OpenFilesIterator.OpenFilesType> openFilesTypes) throws IOException Get a remote iterator to the open files list by type, managed by NameNode.- Parameters:
openFilesTypes-- Throws:
IOException
-
listOpenFiles
public org.apache.hadoop.fs.RemoteIterator<OpenFileEntry> listOpenFiles(EnumSet<OpenFilesIterator.OpenFilesType> openFilesTypes, String path) throws IOException Get a remote iterator to the open files list by type and path, managed by NameNode.- Parameters:
openFilesTypes-
path-
- Throws:
IOException
-
msync
A blocking call to wait for the Observer NameNode state ID to reach the current client state ID. The current client state ID is given by the client alignment context. An assumption is that the client alignment context has the state ID set at this point, because ObserverReadProxyProvider sets up the initial state ID when it is created.- Throws:
IOException
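A sketch of the read-after-write pattern this call supports when reads may be served by Observer NameNodes (path hypothetical):

    // Write through this client, then fence the read with msync() so an
    // Observer NameNode has caught up to this client's state ID.
    client.mkdirs("/data/marker", null, true);
    client.msync();
    System.out.println(client.getFileInfo("/data/marker") != null);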
-
getHAServiceState
@VisibleForTesting public org.apache.hadoop.ha.HAServiceProtocol.HAServiceState getHAServiceState() throws IOException A non-blocking call to get the HA service state of the NameNode.- Returns:
- HA state of NameNode
- Throws:
IOException
-
getDeadNodes
If deadNodeDetectionEnabled is true, return the dead nodes detected by all the DFSInputStreams in the same client. Otherwise return the dead nodes detected by the given DFSInputStream. -
isDeadNode
If deadNodeDetectionEnabled is true, the judgement is based on whether this datanode is included in the DeadNodeDetector. Otherwise the judgement is based on the given DFSInputStream. -
addNodeToDeadNodeDetector
Add the given datanode to the DeadNodeDetector. -
removeNodeFromDeadNodeDetector
public void removeNodeFromDeadNodeDetector(DFSInputStream dfsInputStream, DatanodeInfo datanodeInfo) Remove the given datanode from the DeadNodeDetector. -
removeNodeFromDeadNodeDetector
public void removeNodeFromDeadNodeDetector(DFSInputStream dfsInputStream, LocatedBlocks locatedBlocks) Remove the datanodes that the given blocks are placed on from the DeadNodeDetector. -
getDeadNodeDetector
Obtain DeadNodeDetector of the current client. -
getLocatedBlockRefresher
Obtain LocatedBlocksRefresher of the current client. -
addLocatedBlocksRefresh
Adds the DFSInputStream to the LocatedBlocksRefresher, so that the underlying LocatedBlocks is periodically refreshed. -
removeLocatedBlocksRefresh
Removes the DFSInputStream from the LocatedBlocksRefresher, so that the underlying LocatedBlocks is no longer periodically refreshed.- Parameters:
dfsInputStream-
-
slowDatanodeReport
- Throws:
IOException
-
getEnclosingRoot
- Throws:
IOException
-