Package org.apache.hadoop.hdfs
Class DistributedFileSystem
java.lang.Object
org.apache.hadoop.conf.Configured
org.apache.hadoop.fs.FileSystem
org.apache.hadoop.hdfs.DistributedFileSystem
- All Implemented Interfaces:
Closeable, AutoCloseable, org.apache.hadoop.conf.Configurable, org.apache.hadoop.crypto.key.KeyProviderTokenIssuer, org.apache.hadoop.fs.BatchListingOperations, org.apache.hadoop.fs.BulkDeleteSource, org.apache.hadoop.fs.LeaseRecoverable, org.apache.hadoop.fs.PathCapabilities, org.apache.hadoop.fs.SafeMode, org.apache.hadoop.fs.WithErasureCoding, org.apache.hadoop.security.token.DelegationTokenIssuer
- Direct Known Subclasses:
ViewDistributedFileSystem
@LimitedPrivate({"MapReduce","HBase"})
@Unstable
public class DistributedFileSystem
extends org.apache.hadoop.fs.FileSystem
implements org.apache.hadoop.crypto.key.KeyProviderTokenIssuer, org.apache.hadoop.fs.BatchListingOperations, org.apache.hadoop.fs.LeaseRecoverable, org.apache.hadoop.fs.SafeMode, org.apache.hadoop.fs.WithErasureCoding
Implementation of the abstract FileSystem for the DFS system.
This object is the way end-user code interacts with a Hadoop
DistributedFileSystem.
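For orientation, here is a minimal sketch of how end-user code typically obtains an instance of this class. The cluster URI is a placeholder, not a value from this page; in practice it comes from fs.defaultFS in core-site.xml:

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class GetDfs {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder cluster address; normally resolved from fs.defaultFS.
        URI uri = URI.create("hdfs://namenode:8020/");
        FileSystem fs = FileSystem.get(uri, conf);
        // FileSystem.get returns a DistributedFileSystem for the hdfs:// scheme,
        // which exposes the HDFS-specific methods documented below.
        DistributedFileSystem dfs = (DistributedFileSystem) fs;
        System.out.println(dfs.getScheme()); // "hdfs", per getScheme() below
        dfs.close();
    }
}
```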
-
Nested Class Summary
Nested Classes:
- static final class DistributedFileSystem.HdfsDataOutputStreamBuilder: Provides the HDFS-specific capabilities to write a file on HDFS.

Nested classes/interfaces inherited from class org.apache.hadoop.fs.FileSystem:
org.apache.hadoop.fs.FileSystem.DirectoryEntries, org.apache.hadoop.fs.FileSystem.Statistics
-
Field Summary
Fields inherited from class org.apache.hadoop.fs.FileSystem
DEFAULT_FS, FS_DEFAULT_NAME_KEY, LOG, SHUTDOWN_HOOK_PRIORITY, statistics, TRASH_PREFIX, USER_HOME_PREFIX

Fields inherited from interface org.apache.hadoop.security.token.DelegationTokenIssuer:
TOKEN_LOG
-
Constructor Summary
Constructors:
DistributedFileSystem()
-
Method Summary
Modifier and Type / Method / Description:

- void access(org.apache.hadoop.fs.Path path, org.apache.hadoop.fs.permission.FsAction mode)
- long addCacheDirective(CacheDirectiveInfo info)
- long addCacheDirective(CacheDirectiveInfo info, EnumSet<CacheFlag> flags): Add a new CacheDirective.
- void addCachePool(CachePoolInfo info): Add a cache pool.
- AddErasureCodingPolicyResponse[] addErasureCodingPolicies(ErasureCodingPolicy[] policies): Add Erasure coding policies to HDFS.
- void allowSnapshot(org.apache.hadoop.fs.Path path)
- org.apache.hadoop.fs.FSDataOutputStream append(org.apache.hadoop.fs.Path f, int bufferSize, org.apache.hadoop.util.Progressable progress)
- org.apache.hadoop.fs.FSDataOutputStream append(org.apache.hadoop.fs.Path f, int bufferSize, org.apache.hadoop.util.Progressable progress, boolean appendToNewBlock)
- org.apache.hadoop.fs.FSDataOutputStream append(org.apache.hadoop.fs.Path f, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, int bufferSize, org.apache.hadoop.util.Progressable progress): Append to an existing file (optional operation).
- org.apache.hadoop.fs.FSDataOutputStream append(org.apache.hadoop.fs.Path f, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, int bufferSize, org.apache.hadoop.util.Progressable progress, InetSocketAddress[] favoredNodes): Append to an existing file (optional operation).
- DistributedFileSystem.HdfsDataOutputStreamBuilder appendFile(org.apache.hadoop.fs.Path path): Create a DistributedFileSystem.HdfsDataOutputStreamBuilder to append a file on DFS.
- org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.PartialListing<org.apache.hadoop.fs.LocatedFileStatus>> batchedListLocatedStatusIterator(List<org.apache.hadoop.fs.Path> paths)
- org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.PartialListing<org.apache.hadoop.fs.FileStatus>> batchedListStatusIterator(List<org.apache.hadoop.fs.Path> paths)
- protected URI canonicalizeUri(URI uri)
- void close()
- void concat(org.apache.hadoop.fs.Path trg, org.apache.hadoop.fs.Path[] psrcs): Move blocks from srcs to trg and delete srcs afterwards.
- org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, boolean overwrite, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress)
- HdfsDataOutputStream create(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, boolean overwrite, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, InetSocketAddress[] favoredNodes): Same as create(Path, FsPermission, boolean, int, short, long, Progressable) with the addition of favoredNodes, a hint to where the namenode should place the file blocks.
- org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, EnumSet<org.apache.hadoop.fs.CreateFlag> cflags, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, org.apache.hadoop.fs.Options.ChecksumOpt checksumOpt)
- void createEncryptionZone(org.apache.hadoop.fs.Path path, String keyName)
- DistributedFileSystem.HdfsDataOutputStreamBuilder createFile(org.apache.hadoop.fs.Path path): Create a HdfsDataOutputStreamBuilder to create a file on DFS.
- org.apache.hadoop.fs.MultipartUploaderBuilder createMultipartUploader(org.apache.hadoop.fs.Path basePath)
- org.apache.hadoop.fs.FSDataOutputStream createNonRecursive(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress): Same as create(), except fails if parent directory doesn't already exist.
- protected HdfsPathHandle createPathHandle(org.apache.hadoop.fs.FileStatus st, org.apache.hadoop.fs.Options.HandleOpt... opts): Create a handle to an HDFS file.
- org.apache.hadoop.fs.Path createSnapshot(org.apache.hadoop.fs.Path path, String snapshotName)
- void createSymlink(org.apache.hadoop.fs.Path target, org.apache.hadoop.fs.Path link, boolean createParent)
- boolean delete(org.apache.hadoop.fs.Path f, boolean recursive)
- void deleteSnapshot(org.apache.hadoop.fs.Path snapshotDir, String snapshotName)
- void disableErasureCodingPolicy(String ecPolicyName): Disable erasure coding policy.
- void disallowSnapshot(org.apache.hadoop.fs.Path path)
- void enableErasureCodingPolicy(String ecPolicyName): Enable erasure coding policy.
- void finalizeUpgrade(): Finalize previously upgraded file system state.
- protected org.apache.hadoop.fs.Path fixRelativePart(org.apache.hadoop.fs.Path p)
- org.apache.hadoop.fs.permission.AclStatus getAclStatus(org.apache.hadoop.fs.Path path)
- org.apache.hadoop.security.token.DelegationTokenIssuer[] getAdditionalTokenIssuers()
- Map<String,String> getAllErasureCodingCodecs(): Retrieve all the erasure coding codecs and coders supported by this file system.
- Collection<ErasureCodingPolicyInfo> getAllErasureCodingPolicies(): Retrieve all the erasure coding policies supported by this file system, including enabled, disabled and removed policies, but excluding REPLICATION policy.
- Collection<? extends org.apache.hadoop.fs.BlockStoragePolicySpi> getAllStoragePolicies()
- long getBytesWithFutureGenerationStamps(): Returns number of bytes within blocks with future generation stamp.
- String getCanonicalServiceName(): Get a canonical service name for this file system.
- org.apache.hadoop.fs.ContentSummary getContentSummary(org.apache.hadoop.fs.Path f)
- long getCorruptBlocksCount(): Returns count of blocks with at least one replica marked corrupt.
- long getDefaultBlockSize()
- protected int getDefaultPort()
- short getDefaultReplication()
- org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> getDelegationToken(String renewer)
- ECTopologyVerifierResult getECTopologyResultForPolicies(String... policyNames): Verifies if the given policies are supported in the given cluster setup.
- org.apache.hadoop.fs.Path getEnclosingRoot(org.apache.hadoop.fs.Path path): Return the path of the enclosing root for a given path. The enclosing root path is a common ancestor that should be used for temp and staging dirs, as well as within encryption zones and other restricted directories.
- ErasureCodingPolicy getErasureCodingPolicy(org.apache.hadoop.fs.Path path): Get erasure coding policy information for the specified path.
- String getErasureCodingPolicyName(org.apache.hadoop.fs.FileStatus fileStatus)
- EncryptionZone getEZForPath(org.apache.hadoop.fs.Path path)
- org.apache.hadoop.fs.BlockLocation[] getFileBlockLocations(org.apache.hadoop.fs.FileStatus file, long start, long len)
- org.apache.hadoop.fs.BlockLocation[] getFileBlockLocations(org.apache.hadoop.fs.Path p, long start, long len): The returned BlockLocation will have different formats for replicated and erasure coded files.
- org.apache.hadoop.fs.FileChecksum getFileChecksum(org.apache.hadoop.fs.Path f)
- org.apache.hadoop.fs.FileChecksum getFileChecksum(org.apache.hadoop.fs.Path f, long length)
- org.apache.hadoop.fs.FileEncryptionInfo getFileEncryptionInfo(org.apache.hadoop.fs.Path path)
- org.apache.hadoop.fs.FileStatus getFileLinkStatus(org.apache.hadoop.fs.Path f)
- org.apache.hadoop.fs.FileStatus getFileStatus(org.apache.hadoop.fs.Path f): Returns the stat information about the file.
- DFSHedgedReadMetrics getHedgedReadMetrics(): Returns the hedged read metrics object for this client.
- org.apache.hadoop.fs.Path getHomeDirectory()
- DFSInotifyEventInputStream getInotifyEventStream()
- DFSInotifyEventInputStream getInotifyEventStream(long lastReadTxid)
- org.apache.hadoop.crypto.key.KeyProvider getKeyProvider()
- org.apache.hadoop.fs.Path getLinkTarget(org.apache.hadoop.fs.Path f)
- LocatedBlocks getLocatedBlocks(org.apache.hadoop.fs.Path p, long start, long len): Returns LocatedBlocks of the corresponding HDFS file p from offset start for length len.
- long getLowRedundancyBlocksCount(): Returns aggregated count of blocks with less redundancy.
- long getMissingBlocksCount(): Returns count of blocks with no good replicas left.
- long getMissingReplOneBlocksCount(): Returns count of blocks with replication factor 1 that have lost the only replica.
- long getPendingDeletionBlocksCount(): Returns count of blocks pending on deletion.
- org.apache.hadoop.fs.QuotaUsage getQuotaUsage(org.apache.hadoop.fs.Path f)
- String getScheme(): Return the protocol scheme for the FileSystem.
- org.apache.hadoop.fs.FsServerDefaults getServerDefaults()
- DatanodeInfo[] getSlowDatanodeStats(): Retrieve stats for slow running datanodes.
- SnapshotDiffReport getSnapshotDiffReport(org.apache.hadoop.fs.Path snapshotDir, String fromSnapshot, String toSnapshot): Get the difference between two snapshots, or between a snapshot and the current tree of a directory.
- SnapshotDiffReportListing getSnapshotDiffReportListing(org.apache.hadoop.fs.Path snapshotDir, String fromSnapshotName, String toSnapshotName, String snapshotDiffStartPath, int snapshotDiffIndex): Get the difference between two snapshots of a directory iteratively.
- SnapshotStatus[] getSnapshotListing(org.apache.hadoop.fs.Path snapshotRoot)
- SnapshottableDirectoryStatus[] getSnapshottableDirListing(): Get the list of snapshottable directories that are owned by the current user.
- org.apache.hadoop.fs.FsStatus getStatus(org.apache.hadoop.fs.Path p)
- BlockStoragePolicy[] getStoragePolicies(): Deprecated.
- org.apache.hadoop.fs.BlockStoragePolicySpi getStoragePolicy(org.apache.hadoop.fs.Path path)
- org.apache.hadoop.fs.Path getTrashRoot(org.apache.hadoop.fs.Path path): Get the root directory of Trash for a path in HDFS.
- Collection<org.apache.hadoop.fs.FileStatus> getTrashRoots(boolean allUsers): Get all the trash roots of HDFS for current user or for all the users.
- URI getUri()
- org.apache.hadoop.fs.Path getWorkingDirectory()
- byte[] getXAttr(org.apache.hadoop.fs.Path path, String name)
- Map<String,byte[]> getXAttrs(org.apache.hadoop.fs.Path path)
- boolean hasPathCapability(org.apache.hadoop.fs.Path path, String capability): HDFS client capabilities.
- void initialize(URI uri, org.apache.hadoop.conf.Configuration conf)
- boolean isFileClosed(org.apache.hadoop.fs.Path src): Get the close status of a file.
- boolean isInSafeMode(): Utility function that returns if the NameNode is in safemode or not.
- boolean isSnapshotTrashRootEnabled(): HDFS only.
- org.apache.hadoop.fs.RemoteIterator<CacheDirectiveEntry> listCacheDirectives(CacheDirectiveInfo filter): List cache directives.
- org.apache.hadoop.fs.RemoteIterator<CachePoolEntry> listCachePools(): List all cache pools.
- org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.Path> listCorruptFileBlocks(org.apache.hadoop.fs.Path path)
- org.apache.hadoop.fs.RemoteIterator<EncryptionZone> listEncryptionZones()
- protected org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.LocatedFileStatus> listLocatedStatus(org.apache.hadoop.fs.Path p, org.apache.hadoop.fs.PathFilter filter): The BlockLocation of the returned LocatedFileStatus will have different formats for replicated and erasure coded files.
- org.apache.hadoop.fs.RemoteIterator<OpenFileEntry> listOpenFiles(): Deprecated.
- org.apache.hadoop.fs.RemoteIterator<OpenFileEntry> listOpenFiles(EnumSet<OpenFilesIterator.OpenFilesType> openFilesTypes): Deprecated.
- org.apache.hadoop.fs.RemoteIterator<OpenFileEntry> listOpenFiles(EnumSet<OpenFilesIterator.OpenFilesType> openFilesTypes, String path)
- org.apache.hadoop.fs.RemoteIterator<ZoneReencryptionStatus> listReencryptionStatus()
- org.apache.hadoop.fs.FileStatus[] listStatus(org.apache.hadoop.fs.Path p): List all the entries of a directory. Note that this operation is not atomic for a large directory.
- org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.FileStatus> listStatusIterator(org.apache.hadoop.fs.Path p): Returns a remote iterator so that followup calls are made on demand while consuming the entries.
- List<String> listXAttrs(org.apache.hadoop.fs.Path path)
- void metaSave(String pathname)
- boolean mkdir(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission): Create a directory, only when the parent directories exist.
- boolean mkdirs(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission): Create a directory and its parent directories.
- void modifyAclEntries(org.apache.hadoop.fs.Path path, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec)
- void modifyCacheDirective(CacheDirectiveInfo info)
- void modifyCacheDirective(CacheDirectiveInfo info, EnumSet<CacheFlag> flags): Modify a CacheDirective.
- void modifyCachePool(CachePoolInfo info): Modify an existing cache pool.
- void msync(): Synchronize client metadata state with Active NameNode.
- org.apache.hadoop.fs.FSDataInputStream open(org.apache.hadoop.fs.PathHandle fd, int bufferSize): Opens an FSDataInputStream with the indicated file ID extracted from the PathHandle.
- org.apache.hadoop.fs.FSDataInputStream open(org.apache.hadoop.fs.Path f, int bufferSize)
- protected HdfsDataOutputStream primitiveCreate(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission absolutePermission, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, org.apache.hadoop.fs.Options.ChecksumOpt checksumOpt)
- protected boolean primitiveMkdir(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission absolutePermission)
- void provisionEZTrash(org.apache.hadoop.fs.Path path, org.apache.hadoop.fs.permission.FsPermission trashPermission)
- org.apache.hadoop.fs.Path provisionSnapshotTrash(org.apache.hadoop.fs.Path path, org.apache.hadoop.fs.permission.FsPermission trashPermission): HDFS only.
- boolean recoverLease(org.apache.hadoop.fs.Path f): Start the lease recovery of a file.
- void reencryptEncryptionZone(org.apache.hadoop.fs.Path zone, HdfsConstants.ReencryptAction action)
- void refreshNodes(): Refreshes the list of hosts and excluded hosts from the configured files.
- void removeAcl(org.apache.hadoop.fs.Path path)
- void removeAclEntries(org.apache.hadoop.fs.Path path, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec)
- void removeCacheDirective(long id): Remove a CacheDirectiveInfo.
- void removeCachePool(String poolName): Remove a cache pool.
- void removeDefaultAcl(org.apache.hadoop.fs.Path path)
- void removeErasureCodingPolicy(String ecPolicyName): Remove erasure coding policy.
- void removeXAttr(org.apache.hadoop.fs.Path path, String name)
- boolean rename(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst)
- void rename(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst, org.apache.hadoop.fs.Options.Rename... options): This rename operation is guaranteed to be atomic.
- void renameSnapshot(org.apache.hadoop.fs.Path path, String snapshotOldName, String snapshotNewName)
- protected org.apache.hadoop.fs.Path resolveLink(org.apache.hadoop.fs.Path f)
- boolean restoreFailedStorage(String arg): Enable/disable/check restoreFailedStorage.
- long rollEdits(): Rolls the edit log on the active NameNode.
- RollingUpgradeInfo rollingUpgrade(HdfsConstants.RollingUpgradeAction action): Rolling upgrade: prepare/finalize/query.
- void satisfyStoragePolicy(org.apache.hadoop.fs.Path path): Set the source path to satisfy storage policy.
- void saveNamespace(): Save namespace image.
- boolean saveNamespace(long timeWindow, long txGap): Save namespace image.
- void setAcl(org.apache.hadoop.fs.Path path, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec)
- void setBalancerBandwidth(long bandwidth): Requests the namenode to tell all datanodes to use a new, non-persistent bandwidth value for dfs.datanode.balance.bandwidthPerSec.
- void setErasureCodingPolicy(org.apache.hadoop.fs.Path path, String ecPolicyName): Set the source path to the specified erasure coding policy.
- void setOwner(org.apache.hadoop.fs.Path p, String username, String groupname)
- void setPermission(org.apache.hadoop.fs.Path p, org.apache.hadoop.fs.permission.FsPermission permission)
- void setQuota(org.apache.hadoop.fs.Path src, long namespaceQuota, long storagespaceQuota): Set a directory's quotas.
- void setQuotaByStorageType(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.StorageType type, long quota): Set the per type storage quota of a directory.
- boolean setReplication(org.apache.hadoop.fs.Path src, short replication)
- boolean setSafeMode(org.apache.hadoop.fs.SafeModeAction action): Enter, leave or get safe mode.
- boolean setSafeMode(org.apache.hadoop.fs.SafeModeAction action, boolean isChecked): Enter, leave or get safe mode.
- boolean setSafeMode(HdfsConstants.SafeModeAction action): Deprecated.
- boolean setSafeMode(HdfsConstants.SafeModeAction action, boolean isChecked): Deprecated. Please instead use setSafeMode(SafeModeAction, boolean).
- void setStoragePolicy(org.apache.hadoop.fs.Path src, String policyName): Set the source path to the specified storage policy.
- void setTimes(org.apache.hadoop.fs.Path p, long mtime, long atime)
- void setVerifyChecksum(boolean verifyChecksum)
- void setWorkingDirectory(org.apache.hadoop.fs.Path dir)
- void setXAttr(org.apache.hadoop.fs.Path path, String name, byte[] value, EnumSet<org.apache.hadoop.fs.XAttrSetFlag> flag)
- org.apache.hadoop.fs.RemoteIterator<SnapshotDiffReportListing> snapshotDiffReportListingRemoteIterator(org.apache.hadoop.fs.Path snapshotDir, String fromSnapshot, String toSnapshot): Returns a remote iterator so that followup calls are made on demand while consuming the SnapshotDiffReportListing entries.
- boolean supportsSymlinks()
- String toString()
- boolean truncate(org.apache.hadoop.fs.Path f, long newLength)
- void unsetErasureCodingPolicy(org.apache.hadoop.fs.Path path): Unset the erasure coding policy from the source path.
- void unsetStoragePolicy(org.apache.hadoop.fs.Path src)
- boolean upgradeStatus(): Get status of upgrade - finalized or not.

Methods inherited from class org.apache.hadoop.fs.FileSystem
append, append, append, areSymlinksEnabled, cancelDeleteOnExit, checkPath, clearStatistics, closeAll, closeAllForUGI, completeLocalOutput, copyFromLocalFile, copyFromLocalFile, copyFromLocalFile, copyFromLocalFile, copyToLocalFile, copyToLocalFile, copyToLocalFile, create, create, create, create, create, create, create, create, create, create, create, createBulkDelete, createDataInputStreamBuilder, createDataInputStreamBuilder, createDataOutputStreamBuilder, createNewFile, createNonRecursive, createNonRecursive, createSnapshot, delete, deleteOnExit, enableSymlinks, exists, get, get, get, getAllStatistics, getBlockSize, getCanonicalUri, getChildFileSystems, getDefaultBlockSize, getDefaultReplication, getDefaultUri, getFileSystemClass, getFSofPath, getGlobalStorageStatistics, getInitialWorkingDirectory, getLength, getLocal, getName, getNamed, getPathHandle, getReplication, getServerDefaults, getStatistics, getStatistics, getStatus, getStorageStatistics, getUsed, getUsed, globStatus, globStatus, isDirectory, isFile, listFiles, listLocatedStatus, listStatus, listStatus, listStatus, listStatusBatch, makeQualified, mkdirs, mkdirs, moveFromLocalFile, moveFromLocalFile, moveToLocalFile, newInstance, newInstance, newInstance, newInstanceLocal, open, open, openFile, openFile, openFileWithOptions, openFileWithOptions, primitiveMkdir, printStatistics, processDeleteOnExit, resolvePath, setDefaultUri, setDefaultUri, setWriteChecksum, setXAttr, startLocalOutput

Methods inherited from class org.apache.hadoop.conf.Configured
getConf, setConf

Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait

Methods inherited from interface org.apache.hadoop.security.token.DelegationTokenIssuer
addDelegationTokens
-
Constructor Details
-
DistributedFileSystem
public DistributedFileSystem()
-
-
Method Details
-
getScheme
Return the protocol scheme for the FileSystem.
- Overrides:
getScheme in class org.apache.hadoop.fs.FileSystem
- Returns:
hdfs
-
getUri
- Specified by:
getUri in class org.apache.hadoop.fs.FileSystem
-
initialize
- Overrides:
initialize in class org.apache.hadoop.fs.FileSystem
- Throws:
IOException
-
getWorkingDirectory
public org.apache.hadoop.fs.Path getWorkingDirectory()
- Specified by:
getWorkingDirectory in class org.apache.hadoop.fs.FileSystem
-
getDefaultBlockSize
public long getDefaultBlockSize()
- Overrides:
getDefaultBlockSize in class org.apache.hadoop.fs.FileSystem
-
getDefaultReplication
public short getDefaultReplication()
- Overrides:
getDefaultReplication in class org.apache.hadoop.fs.FileSystem
-
setWorkingDirectory
public void setWorkingDirectory(org.apache.hadoop.fs.Path dir)
- Specified by:
setWorkingDirectory in class org.apache.hadoop.fs.FileSystem
-
getHomeDirectory
public org.apache.hadoop.fs.Path getHomeDirectory()
- Overrides:
getHomeDirectory in class org.apache.hadoop.fs.FileSystem
-
getHedgedReadMetrics
Returns the hedged read metrics object for this client.
- Returns:
- object of DFSHedgedReadMetrics
-
getFileBlockLocations
public org.apache.hadoop.fs.BlockLocation[] getFileBlockLocations(org.apache.hadoop.fs.FileStatus file, long start, long len) throws IOException
- Overrides:
getFileBlockLocations in class org.apache.hadoop.fs.FileSystem
- Throws:
IOException
-
getFileBlockLocations
public org.apache.hadoop.fs.BlockLocation[] getFileBlockLocations(org.apache.hadoop.fs.Path p, long start, long len) throws IOException
The returned BlockLocation will have different formats for replicated and erasure coded files. Please refer to FileSystem.getFileBlockLocations(FileStatus, long, long) for more details.
- Overrides:
getFileBlockLocations in class org.apache.hadoop.fs.FileSystem
- Throws:
IOException
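A sketch of inspecting block locality for a whole file. The path is illustrative and `dfs` is assumed to be an already-initialized DistributedFileSystem:

```java
// Assumes: dfs is an initialized DistributedFileSystem and /data/input.bin exists.
org.apache.hadoop.fs.Path p = new org.apache.hadoop.fs.Path("/data/input.bin");
org.apache.hadoop.fs.FileStatus st = dfs.getFileStatus(p);
org.apache.hadoop.fs.BlockLocation[] locs =
    dfs.getFileBlockLocations(p, 0, st.getLen()); // whole file
for (org.apache.hadoop.fs.BlockLocation loc : locs) {
    // Each entry covers the byte range [getOffset(), getOffset() + getLength())
    // and names the datanodes holding that block's replicas.
    System.out.println(loc.getOffset() + "+" + loc.getLength()
        + " on " + String.join(",", loc.getHosts()));
}
```

Schedulers typically use the returned hosts to place computation near the data.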
-
setVerifyChecksum
public void setVerifyChecksum(boolean verifyChecksum)
- Overrides:
setVerifyChecksum in class org.apache.hadoop.fs.FileSystem
-
recoverLease
Start the lease recovery of a file.
- Specified by:
recoverLease in interface org.apache.hadoop.fs.LeaseRecoverable
- Parameters:
f - a file
- Returns:
- true if the file is already closed
- Throws:
IOException - if an error occurs
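Lease recovery is asynchronous: a false return means recovery has started but the file is not yet closed. A common pattern (a sketch; the path and one-second poll interval are illustrative) pairs recoverLease with isFileClosed:

```java
// Assumes: dfs is an initialized DistributedFileSystem.
org.apache.hadoop.fs.Path f = new org.apache.hadoop.fs.Path("/logs/app.log");
boolean closed = dfs.recoverLease(f);  // true if the file is already closed
while (!closed) {
    Thread.sleep(1000L);               // illustrative 1s poll interval
    closed = dfs.isFileClosed(f);      // re-check the close status
}
```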
-
open
public org.apache.hadoop.fs.FSDataInputStream open(org.apache.hadoop.fs.Path f, int bufferSize) throws IOException
- Specified by:
open in class org.apache.hadoop.fs.FileSystem
- Throws:
IOException
-
open
public org.apache.hadoop.fs.FSDataInputStream open(org.apache.hadoop.fs.PathHandle fd, int bufferSize) throws IOException
Opens an FSDataInputStream with the indicated file ID extracted from the PathHandle.
- Overrides:
open in class org.apache.hadoop.fs.FileSystem
- Parameters:
fd - Reference to entity in this FileSystem.
bufferSize - the size of the buffer to be used.
- Throws:
org.apache.hadoop.fs.InvalidPathHandleException - If PathHandle constraints do not hold
IOException - On I/O errors
-
getErasureCodingPolicyName
- Specified by:
getErasureCodingPolicyName in interface org.apache.hadoop.fs.WithErasureCoding
-
createPathHandle
protected HdfsPathHandle createPathHandle(org.apache.hadoop.fs.FileStatus st, org.apache.hadoop.fs.Options.HandleOpt... opts)
Create a handle to an HDFS file.
- Overrides:
createPathHandle in class org.apache.hadoop.fs.FileSystem
- Parameters:
st - HdfsFileStatus instance from NameNode
opts - Standard handle arguments
- Returns:
- A handle to the file.
- Throws:
IllegalArgumentException - If the FileStatus instance refers to a directory, symlink, or another namesystem.
UnsupportedOperationException - If opts are not specified or both data and location are not allowed to change.
-
append
public org.apache.hadoop.fs.FSDataOutputStream append(org.apache.hadoop.fs.Path f, int bufferSize, org.apache.hadoop.util.Progressable progress) throws IOException
- Specified by:
append in class org.apache.hadoop.fs.FileSystem
- Throws:
IOException
-
append
public org.apache.hadoop.fs.FSDataOutputStream append(org.apache.hadoop.fs.Path f, int bufferSize, org.apache.hadoop.util.Progressable progress, boolean appendToNewBlock) throws IOException
- Overrides:
append in class org.apache.hadoop.fs.FileSystem
- Throws:
IOException
-
append
public org.apache.hadoop.fs.FSDataOutputStream append(org.apache.hadoop.fs.Path f, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, int bufferSize, org.apache.hadoop.util.Progressable progress) throws IOException
Append to an existing file (optional operation).
- Parameters:
f - the existing file to be appended.
flag - Flags for the Append operation. CreateFlag.APPEND is mandatory to be present.
bufferSize - the size of the buffer to be used.
progress - for reporting progress if it is not null.
- Returns:
- Returns instance of FSDataOutputStream
- Throws:
IOException
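A sketch of the flag-based append. CreateFlag.APPEND is mandatory, per the parameter notes above; the path and buffer size are illustrative and `dfs` is assumed to be initialized:

```java
// Assumes: dfs is an initialized DistributedFileSystem and the file already exists.
java.util.EnumSet<org.apache.hadoop.fs.CreateFlag> flags =
    java.util.EnumSet.of(org.apache.hadoop.fs.CreateFlag.APPEND);  // mandatory flag
try (org.apache.hadoop.fs.FSDataOutputStream out =
         dfs.append(new org.apache.hadoop.fs.Path("/logs/app.log"),
                    flags, 4096, null)) {  // progress may be null
    out.write("another line\n".getBytes(java.nio.charset.StandardCharsets.UTF_8));
}
```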
-
append
public org.apache.hadoop.fs.FSDataOutputStream append(org.apache.hadoop.fs.Path f, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, int bufferSize, org.apache.hadoop.util.Progressable progress, InetSocketAddress[] favoredNodes) throws IOException
Append to an existing file (optional operation).
- Parameters:
f - the existing file to be appended.
flag - Flags for the Append operation. CreateFlag.APPEND is mandatory to be present.
bufferSize - the size of the buffer to be used.
progress - for reporting progress if it is not null.
favoredNodes - Favored nodes for new blocks
- Returns:
- Returns instance of FSDataOutputStream
- Throws:
IOException
-
create
public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, boolean overwrite, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress) throws IOException
- Specified by:
create in class org.apache.hadoop.fs.FileSystem
- Throws:
IOException
-
create
public HdfsDataOutputStream create(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, boolean overwrite, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, InetSocketAddress[] favoredNodes) throws IOException
Same as create(Path, FsPermission, boolean, int, short, long, Progressable), with the addition of favoredNodes, a hint to where the namenode should place the file blocks. The favored-nodes hint is not persisted in HDFS, so it may be honored only at creation time. With favored nodes, blocks are pinned on those datanodes so that the balancer will not move them; HDFS may still move blocks away from the favored nodes during replication. A value of null means no favored nodes for this create.
- Throws:
IOException
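A sketch of passing a favored-nodes hint. The hostnames and path are placeholders; port 9866 is the default datanode transfer port in Hadoop 3, and `dfs` is assumed to be initialized:

```java
// Assumes: dfs is an initialized DistributedFileSystem; hostnames are placeholders.
java.net.InetSocketAddress[] favored = {
    new java.net.InetSocketAddress("dn1.example.com", 9866),
    new java.net.InetSocketAddress("dn2.example.com", 9866),
};
try (org.apache.hadoop.hdfs.client.HdfsDataOutputStream out = dfs.create(
        new org.apache.hadoop.fs.Path("/data/hot.bin"),
        org.apache.hadoop.fs.permission.FsPermission.getFileDefault(),
        true,                // overwrite
        4096,                // bufferSize
        (short) 3,           // replication
        128L * 1024 * 1024,  // blockSize: 128 MiB
        null,                // progress
        favored)) {
    out.write(1);
}
```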
-
create
public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, EnumSet<org.apache.hadoop.fs.CreateFlag> cflags, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, org.apache.hadoop.fs.Options.ChecksumOpt checksumOpt) throws IOException
- Overrides:
create in class org.apache.hadoop.fs.FileSystem
- Throws:
IOException
-
primitiveCreate
protected HdfsDataOutputStream primitiveCreate(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission absolutePermission, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, org.apache.hadoop.fs.Options.ChecksumOpt checksumOpt) throws IOException
- Overrides:
primitiveCreate in class org.apache.hadoop.fs.FileSystem
- Throws:
IOException
-
createNonRecursive
public org.apache.hadoop.fs.FSDataOutputStream createNonRecursive(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress) throws IOException
Same as create(), except fails if the parent directory doesn't already exist.
- Overrides:
createNonRecursive in class org.apache.hadoop.fs.FileSystem
- Throws:
IOException
-
setReplication
- Overrides:
setReplication in class org.apache.hadoop.fs.FileSystem
- Throws:
IOException
-
setStoragePolicy
Set the source path to the specified storage policy.
- Overrides:
setStoragePolicy in class org.apache.hadoop.fs.FileSystem
- Parameters:
src - The source path referring to either a directory or a file.
policyName - The name of the storage policy.
- Throws:
IOException
-
unsetStoragePolicy
- Overrides:
unsetStoragePolicy in class org.apache.hadoop.fs.FileSystem
- Throws:
IOException
-
getStoragePolicy
public org.apache.hadoop.fs.BlockStoragePolicySpi getStoragePolicy(org.apache.hadoop.fs.Path path) throws IOException
- Overrides:
getStoragePolicy in class org.apache.hadoop.fs.FileSystem
- Throws:
IOException
-
getAllStoragePolicies
- Overrides:
getAllStoragePolicies in class org.apache.hadoop.fs.FileSystem
- Throws:
IOException
-
getBytesWithFutureGenerationStamps
Returns the number of bytes within blocks with a future generation stamp. These are bytes that will potentially be deleted if we forceExit from safe mode.
- Returns:
- number of bytes.
- Throws:
IOException
-
getStoragePolicies
Deprecated. Prefer FileSystem.getAllStoragePolicies().
- Throws:
IOException
-
concat
public void concat(org.apache.hadoop.fs.Path trg, org.apache.hadoop.fs.Path[] psrcs) throws IOException
Move blocks from srcs to trg and delete srcs afterwards. The file block sizes must be the same.
- Overrides:
concat in class org.apache.hadoop.fs.FileSystem
- Parameters:
trg - existing file to append to
psrcs - list of files (same block size, same replication)
- Throws:
IOException
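A sketch of stitching part files into one. The paths are illustrative, `dfs` is assumed to be initialized, and, per the contract above, all files must share the same block size:

```java
// Assumes: dfs is an initialized DistributedFileSystem, and all files below
// exist with the same block size and replication.
org.apache.hadoop.fs.Path target = new org.apache.hadoop.fs.Path("/data/part-00000");
org.apache.hadoop.fs.Path[] srcs = {
    new org.apache.hadoop.fs.Path("/data/part-00001"),
    new org.apache.hadoop.fs.Path("/data/part-00002"),
};
dfs.concat(target, srcs);  // srcs' blocks are moved onto target; srcs are deleted
```

Because only block lists are moved, this is a metadata operation on the NameNode; no file data is copied.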
-
rename
public boolean rename(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst) throws IOException
- Specified by:
rename in class org.apache.hadoop.fs.FileSystem
- Throws:
IOException
-
rename
public void rename(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst, org.apache.hadoop.fs.Options.Rename... options) throws IOException
This rename operation is guaranteed to be atomic.
- Overrides:
rename in class org.apache.hadoop.fs.FileSystem
- Throws:
IOException
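A sketch of the atomic variant as the commit step of a write-to-temp-then-rename protocol (paths are illustrative; `dfs` is assumed to be initialized):

```java
// Assumes: dfs is an initialized DistributedFileSystem.
// OVERWRITE lets the rename atomically replace an existing destination,
// so readers see either the old file or the new one, never a partial state.
dfs.rename(new org.apache.hadoop.fs.Path("/staging/report.tmp"),
           new org.apache.hadoop.fs.Path("/final/report.csv"),
           org.apache.hadoop.fs.Options.Rename.OVERWRITE);
```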
-
truncate
- Overrides:
truncate in class org.apache.hadoop.fs.FileSystem
- Throws:
IOException
-
delete
- Specified by:
delete in class org.apache.hadoop.fs.FileSystem
- Throws:
IOException
-
getContentSummary
public org.apache.hadoop.fs.ContentSummary getContentSummary(org.apache.hadoop.fs.Path f) throws IOException
- Overrides:
getContentSummary in class org.apache.hadoop.fs.FileSystem
- Throws:
IOException
-
getQuotaUsage
public org.apache.hadoop.fs.QuotaUsage getQuotaUsage(org.apache.hadoop.fs.Path f) throws IOException
- Overrides:
getQuotaUsage in class org.apache.hadoop.fs.FileSystem
- Throws:
IOException
-
setQuota
public void setQuota(org.apache.hadoop.fs.Path src, long namespaceQuota, long storagespaceQuota) throws IOException
Set a directory's quotas.
- Overrides:
setQuota in class org.apache.hadoop.fs.FileSystem
- Throws:
IOException
-
setQuotaByStorageType
public void setQuotaByStorageType(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.StorageType type, long quota) throws IOException
Set the per type storage quota of a directory.
- Overrides:
setQuotaByStorageType in class org.apache.hadoop.fs.FileSystem
- Parameters:
src - target directory whose quota is to be modified.
type - storage type of the specific storage type quota to be modified.
quota - value of the specific storage type quota to be modified. May be HdfsConstants.QUOTA_RESET to clear quota by storage type.
- Throws:
IOException
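A sketch combining the two quota calls above. The directory and limits are illustrative, and quota administration normally requires superuser privileges:

```java
// Assumes: dfs is an initialized DistributedFileSystem and the caller is a superuser.
org.apache.hadoop.fs.Path dir = new org.apache.hadoop.fs.Path("/projects/alpha");
// Namespace quota: at most 100000 names; storagespace quota: 10 GiB of raw storage.
dfs.setQuota(dir, 100000L, 10L * 1024 * 1024 * 1024);
// Additionally cap SSD usage under the directory at 1 GiB.
dfs.setQuotaByStorageType(dir, org.apache.hadoop.fs.StorageType.SSD, 1L << 30);
```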
-
listStatus
List all the entries of a directory. Note that this operation is not atomic for a large directory. The entries of a directory may be fetched from the NameNode multiple times. It only guarantees that each name occurs once if the directory undergoes changes between the calls. If any of the immediate children of the given path f is a symlink, the returned FileStatus object for that child is represented as a symlink. It is not resolved to the target path, and the target path's FileStatus object is not returned. The target path is available via getSymlink on that child's FileStatus object. Since it represents a symlink, isDirectory on that child's FileStatus will return false. If you want the FileStatus of the target path for that child, use the getFileStatus API with the child's symlink path. Please see getFileStatus(Path f).
- Specified by:
listStatus in class org.apache.hadoop.fs.FileSystem
- Throws:
IOException
-
listLocatedStatus
protected org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.LocatedFileStatus> listLocatedStatus(org.apache.hadoop.fs.Path p, org.apache.hadoop.fs.PathFilter filter) throws IOException
The BlockLocation of the returned LocatedFileStatus will have different formats for replicated and erasure coded files. Please refer to FileSystem.getFileBlockLocations(FileStatus, long, long) for more details.
- Overrides:
listLocatedStatus in class org.apache.hadoop.fs.FileSystem
- Throws:
IOException
-
listStatusIterator
public org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.FileStatus> listStatusIterator(org.apache.hadoop.fs.Path p) throws IOException Returns a remote iterator so that follow-up calls are made on demand while consuming the entries. This reduces memory consumption when listing a large directory.- Overrides:
listStatusIteratorin classorg.apache.hadoop.fs.FileSystem- Parameters:
p- target path- Returns:
- remote iterator
- Throws:
IOException
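A minimal sketch of the on-demand iteration described above (hypothetical path, requires a running cluster):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class ListIteratorExample {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Entries are fetched from the NameNode in batches as the iterator
        // is consumed, so a huge directory never sits fully in client memory.
        RemoteIterator<FileStatus> it = fs.listStatusIterator(new Path("/big/dir"));
        long count = 0;
        while (it.hasNext()) {
            FileStatus st = it.next();
            count++;
        }
        System.out.println("entries: " + count);
    }
}
```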
-
batchedListStatusIterator
public org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.PartialListing<org.apache.hadoop.fs.FileStatus>> batchedListStatusIterator(List<org.apache.hadoop.fs.Path> paths) throws IOException - Specified by:
batchedListStatusIteratorin interfaceorg.apache.hadoop.fs.BatchListingOperations- Throws:
IOException
-
batchedListLocatedStatusIterator
public org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.PartialListing<org.apache.hadoop.fs.LocatedFileStatus>> batchedListLocatedStatusIterator(List<org.apache.hadoop.fs.Path> paths) throws IOException - Specified by:
batchedListLocatedStatusIteratorin interfaceorg.apache.hadoop.fs.BatchListingOperations- Throws:
IOException
-
mkdir
public boolean mkdir(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission) throws IOException Create a directory, only when the parent directories exist. SeeFsPermission.applyUMask(FsPermission)for details of how the permission is applied.- Parameters:
f - The path to create. permission - The permission. See FsPermission#applyUMask for details about how this is used to calculate the effective permission.- Throws:
IOException
-
mkdirs
public boolean mkdirs(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission) throws IOException Create a directory and its parent directories. SeeFsPermission.applyUMask(FsPermission)for details of how the permission is applied.- Specified by:
mkdirsin classorg.apache.hadoop.fs.FileSystem- Parameters:
f - The path to create. permission - The permission. See FsPermission#applyUMask for details about how this is used to calculate the effective permission.- Throws:
IOException
-
primitiveMkdir
protected boolean primitiveMkdir(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission absolutePermission) throws IOException - Overrides:
primitiveMkdirin classorg.apache.hadoop.fs.FileSystem- Throws:
IOException
-
close
- Specified by:
closein interfaceAutoCloseable- Specified by:
closein interfaceCloseable- Overrides:
closein classorg.apache.hadoop.fs.FileSystem- Throws:
IOException
-
toString
-
getClient
-
getStatus
- Overrides:
getStatusin classorg.apache.hadoop.fs.FileSystem- Throws:
IOException
-
getMissingBlocksCount
Returns count of blocks with no good replicas left. Normally should be zero.- Throws:
IOException
-
getPendingDeletionBlocksCount
Returns count of blocks pending on deletion.- Throws:
IOException
-
getMissingReplOneBlocksCount
Returns count of blocks with replication factor 1 and have lost the only replica.- Throws:
IOException
-
getLowRedundancyBlocksCount
Returns aggregated count of blocks with less redundancy.- Throws:
IOException
-
getCorruptBlocksCount
Returns count of blocks with at least one replica marked corrupt.- Throws:
IOException
-
listCorruptFileBlocks
public org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.Path> listCorruptFileBlocks(org.apache.hadoop.fs.Path path) throws IOException - Overrides:
listCorruptFileBlocksin classorg.apache.hadoop.fs.FileSystem- Throws:
IOException
-
getDataNodeStats
- Returns:
- datanode statistics.
- Throws:
IOException
-
getDataNodeStats
- Returns:
- datanode statistics for the given type.
- Throws:
IOException
-
setSafeMode
Enter, leave or get safe mode.- Specified by:
setSafeModein interfaceorg.apache.hadoop.fs.SafeMode- Throws:
IOException- See Also:
-
setSafeMode
public boolean setSafeMode(org.apache.hadoop.fs.SafeModeAction action, boolean isChecked) throws IOException Enter, leave or get safe mode.- Specified by:
setSafeModein interfaceorg.apache.hadoop.fs.SafeMode- Parameters:
action - One of SafeModeAction.ENTER, SafeModeAction.LEAVE and SafeModeAction.GET. isChecked - If true, check only the Active NameNode's status; otherwise check the first NameNode's status.- Throws:
IOException
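A hedged sketch of the non-deprecated overloads (entering and leaving safe mode requires superuser privileges; the cluster is assumed reachable via the default fs URI):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.SafeModeAction;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class SafeModeExample {
    public static void main(String[] args) throws Exception {
        try (DistributedFileSystem dfs =
                 (DistributedFileSystem) FileSystem.get(new Configuration())) {
            // Query safe mode, checking only the Active NameNode's status.
            boolean inSafeMode = dfs.setSafeMode(SafeModeAction.GET, true);
            System.out.println("in safe mode: " + inSafeMode);
            if (!inSafeMode) {
                dfs.setSafeMode(SafeModeAction.ENTER, false); // block mutations
                // ... perform maintenance ...
                dfs.setSafeMode(SafeModeAction.LEAVE, false); // resume normal operation
            }
        }
    }
}
```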
-
setSafeMode
Deprecated. Please use setSafeMode(SafeModeAction) instead. Enter, leave or get safe mode.- Throws:
IOException- See Also:
-
setSafeMode
@Deprecated public boolean setSafeMode(HdfsConstants.SafeModeAction action, boolean isChecked) throws IOException Deprecated. Please use setSafeMode(SafeModeAction, boolean) instead. Enter, leave or get safe mode.- Parameters:
action - One of SafeModeAction.ENTER, SafeModeAction.LEAVE and SafeModeAction.GET. isChecked - If true, check only the Active NameNode's status; otherwise check the first NameNode's status.- Throws:
IOException- See Also:
-
saveNamespace
Save namespace image.- Parameters:
timeWindow- NameNode can ignore this command if the latest checkpoint was done within the given time period (in seconds).- Returns:
- true if a new checkpoint has been made
- Throws:
IOException- See Also:
-
saveNamespace
Save namespace image. NameNode always does the checkpoint.- Throws:
IOException
-
rollEdits
Rolls the edit log on the active NameNode. Requires super-user privileges.- Returns:
- the transaction ID of the newly created segment
- Throws:
IOException- See Also:
-
restoreFailedStorage
Enable, disable, or check restoreFailedStorage.- Throws:
IOException- See Also:
-
refreshNodes
Refreshes the list of hosts and excluded hosts from the configured files.- Throws:
IOException
-
finalizeUpgrade
Finalize previously upgraded files system state.- Throws:
IOException
-
upgradeStatus
Get status of upgrade - finalized or not.- Returns:
- true if upgrade is finalized or if no upgrade is in progress and false otherwise.
- Throws:
IOException
-
rollingUpgrade
public RollingUpgradeInfo rollingUpgrade(HdfsConstants.RollingUpgradeAction action) throws IOException Rolling upgrade: prepare/finalize/query.- Throws:
IOException
-
metaSave
- Throws:
IOException
-
getServerDefaults
- Overrides:
getServerDefaultsin classorg.apache.hadoop.fs.FileSystem- Throws:
IOException
-
getFileStatus
public org.apache.hadoop.fs.FileStatus getFileStatus(org.apache.hadoop.fs.Path f) throws IOException Returns the stat information about the file. If the given path is a symlink, the path is resolved to its target and the FileStatus of the resolved path is returned. The result is not represented as a symlink, and the isDirectory API returns true if the resolved path is a directory, false otherwise.- Specified by:
getFileStatusin classorg.apache.hadoop.fs.FileSystem- Throws:
FileNotFoundException- if the file does not exist.IOException
-
msync
Synchronize client metadata state with the Active NameNode. In HA, the client synchronizes its state with the Active NameNode in order to guarantee subsequent read consistency from Observer Nodes.
- Overrides:
msyncin classorg.apache.hadoop.fs.FileSystem- Throws:
IOException
-
createSymlink
public void createSymlink(org.apache.hadoop.fs.Path target, org.apache.hadoop.fs.Path link, boolean createParent) throws IOException - Overrides:
createSymlinkin classorg.apache.hadoop.fs.FileSystem- Throws:
IOException
-
supportsSymlinks
public boolean supportsSymlinks()- Overrides:
supportsSymlinksin classorg.apache.hadoop.fs.FileSystem
-
getFileLinkStatus
public org.apache.hadoop.fs.FileStatus getFileLinkStatus(org.apache.hadoop.fs.Path f) throws IOException - Overrides:
getFileLinkStatusin classorg.apache.hadoop.fs.FileSystem- Throws:
IOException
-
getLinkTarget
- Overrides:
getLinkTargetin classorg.apache.hadoop.fs.FileSystem- Throws:
IOException
-
resolveLink
- Overrides:
resolveLinkin classorg.apache.hadoop.fs.FileSystem- Throws:
IOException
-
getFileChecksum
public org.apache.hadoop.fs.FileChecksum getFileChecksum(org.apache.hadoop.fs.Path f) throws IOException - Overrides:
getFileChecksumin classorg.apache.hadoop.fs.FileSystem- Throws:
IOException
-
getFileChecksum
public org.apache.hadoop.fs.FileChecksum getFileChecksum(org.apache.hadoop.fs.Path f, long length) throws IOException - Overrides:
getFileChecksumin classorg.apache.hadoop.fs.FileSystem- Throws:
IOException
-
setPermission
public void setPermission(org.apache.hadoop.fs.Path p, org.apache.hadoop.fs.permission.FsPermission permission) throws IOException - Overrides:
setPermissionin classorg.apache.hadoop.fs.FileSystem- Throws:
IOException
-
setOwner
public void setOwner(org.apache.hadoop.fs.Path p, String username, String groupname) throws IOException - Overrides:
setOwnerin classorg.apache.hadoop.fs.FileSystem- Throws:
IOException
-
setTimes
- Overrides:
setTimesin classorg.apache.hadoop.fs.FileSystem- Throws:
IOException
-
getDefaultPort
protected int getDefaultPort()- Overrides:
getDefaultPortin classorg.apache.hadoop.fs.FileSystem
-
getDelegationToken
public org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> getDelegationToken(String renewer) throws IOException - Specified by:
getDelegationTokenin interfaceorg.apache.hadoop.security.token.DelegationTokenIssuer- Overrides:
getDelegationTokenin classorg.apache.hadoop.fs.FileSystem- Throws:
IOException
-
setBalancerBandwidth
Requests the namenode to tell all datanodes to use a new, non-persistent bandwidth value for dfs.datanode.balance.bandwidthPerSec. The bandwidth parameter is the max number of bytes per second of network bandwidth to be used by a datanode during balancing.- Parameters:
bandwidth- Balancer bandwidth in bytes per second for all datanodes.- Throws:
IOException
-
getCanonicalServiceName
Get a canonical service name for this file system. If the URI is logical, the hostname part of the URI will be returned.- Specified by:
getCanonicalServiceNamein interfaceorg.apache.hadoop.security.token.DelegationTokenIssuer- Overrides:
getCanonicalServiceNamein classorg.apache.hadoop.fs.FileSystem- Returns:
- a service string that uniquely identifies this file system.
-
canonicalizeUri
- Overrides:
canonicalizeUriin classorg.apache.hadoop.fs.FileSystem
-
isInSafeMode
Utility function that returns whether the NameNode is in safe mode. In HA mode, this API returns only the Active NameNode's safemode status.- Returns:
- true if NameNode is in safemode, false otherwise.
- Throws:
IOException- when there is an issue communicating with the NameNode
-
isSnapshotTrashRootEnabled
HDFS only. Returns whether the NameNode has enabled the snapshot trash root configuration dfs.namenode.snapshot.trashroot.enabled.- Returns:
- true if NameNode enabled snapshot trash root
- Throws:
IOException- when there is an issue communicating with the NameNode
-
allowSnapshot
- Throws:
IOException- See Also:
-
disallowSnapshot
- Throws:
IOException- See Also:
-
createSnapshot
public org.apache.hadoop.fs.Path createSnapshot(org.apache.hadoop.fs.Path path, String snapshotName) throws IOException - Overrides:
createSnapshotin classorg.apache.hadoop.fs.FileSystem- Throws:
IOException
-
renameSnapshot
public void renameSnapshot(org.apache.hadoop.fs.Path path, String snapshotOldName, String snapshotNewName) throws IOException - Overrides:
renameSnapshotin classorg.apache.hadoop.fs.FileSystem- Throws:
IOException
-
getSnapshottableDirListing
Get the list of snapshottable directories that are owned by the current user. Return all the snapshottable directories if the current user is a super user.- Returns:
- The list of all the current snapshottable directories.
- Throws:
IOException- If an I/O error occurred.
-
getSnapshotListing
public SnapshotStatus[] getSnapshotListing(org.apache.hadoop.fs.Path snapshotRoot) throws IOException - Returns:
- all the snapshots for a snapshottable directory
- Throws:
IOException
-
deleteSnapshot
public void deleteSnapshot(org.apache.hadoop.fs.Path snapshotDir, String snapshotName) throws IOException - Overrides:
deleteSnapshotin classorg.apache.hadoop.fs.FileSystem- Throws:
IOException
-
snapshotDiffReportListingRemoteIterator
public org.apache.hadoop.fs.RemoteIterator<SnapshotDiffReportListing> snapshotDiffReportListingRemoteIterator(org.apache.hadoop.fs.Path snapshotDir, String fromSnapshot, String toSnapshot) throws IOException Returns a remote iterator so that follow-up calls are made on demand while consuming the SnapshotDiffReportListing entries. This reduces memory consumption overhead in case the snapshot diff report is huge.- Parameters:
snapshotDir - full path of the directory where snapshots are taken. fromSnapshot - snapshot name of the from point; null indicates the current tree. toSnapshot - snapshot name of the to point; null indicates the current tree.- Returns:
- Remote iterator
- Throws:
IOException
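The batched diff consumption might look like this sketch (the snapshottable directory /snapdir and snapshot name "s1" are hypothetical; the directory must already have snapshots):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing;

public class SnapshotDiffExample {
    public static void main(String[] args) throws Exception {
        try (DistributedFileSystem dfs =
                 (DistributedFileSystem) FileSystem.get(new Configuration())) {
            Path dir = new Path("/snapdir");
            // Compare snapshot "s1" against the current tree (null = current tree).
            RemoteIterator<SnapshotDiffReportListing> it =
                dfs.snapshotDiffReportListingRemoteIterator(dir, "s1", null);
            int batches = 0;
            while (it.hasNext()) {
                SnapshotDiffReportListing batch = it.next();
                batches++; // each batch carries a bounded chunk of diff entries
            }
            System.out.println("diff batches: " + batches);
        }
    }
}
```

Because entries arrive in batches, memory usage stays bounded even when the diff between the snapshots is huge.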
-
getSnapshotDiffReport
public SnapshotDiffReport getSnapshotDiffReport(org.apache.hadoop.fs.Path snapshotDir, String fromSnapshot, String toSnapshot) throws IOException Get the difference between two snapshots, or between a snapshot and the current tree of a directory. -
getSnapshotDiffReportListing
public SnapshotDiffReportListing getSnapshotDiffReportListing(org.apache.hadoop.fs.Path snapshotDir, String fromSnapshotName, String toSnapshotName, String snapshotDiffStartPath, int snapshotDiffIndex) throws IOException Get the difference between two snapshots of a directory iteratively.- Parameters:
snapshotDir - full path of the directory where snapshots are taken. fromSnapshotName - snapshot name of the from point; null indicates the current tree. toSnapshotName - snapshot name of the to point; null indicates the current tree. snapshotDiffStartPath - path relative to the snapshottable root directory from which the snapshotdiff computation needs to start. snapshotDiffIndex - index in the created or deleted list of the directory at which the snapshotdiff computation stopped during the last RPC call; -1 indicates the diff computation should start right from the start path.- Returns:
- the difference report represented as a
SnapshotDiffReportListing. - Throws:
IOException- if an I/O error occurred.
-
isFileClosed
Get the close status of a file.- Specified by:
isFileClosed in interface org.apache.hadoop.fs.LeaseRecoverable- Parameters:
src - The path to the file.- Returns:
- return true if file is closed
- Throws:
FileNotFoundException- if the file does not exist.IOException- If an I/O error occurred
-
addCacheDirective
- Throws:
IOException- See Also:
-
addCacheDirective
Add a new CacheDirective.- Parameters:
info - Information about a directive to add. flags - CacheFlags to use for this operation.- Returns:
- the ID of the directive that was created.
- Throws:
IOException- if the directive could not be added
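As an illustration (not from the Javadoc), adding a directive with the builder API; the pool name "hot-pool" and path are hypothetical, the pool must already exist (see addCachePool), and the caller needs access to it:

```java
import java.util.EnumSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CacheFlag;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo;

public class CacheDirectiveExample {
    public static void main(String[] args) throws Exception {
        try (DistributedFileSystem dfs =
                 (DistributedFileSystem) FileSystem.get(new Configuration())) {
            CacheDirectiveInfo info = new CacheDirectiveInfo.Builder()
                .setPath(new Path("/data/hot/table"))
                .setPool("hot-pool")
                .setReplication((short) 2)   // keep two replicas cached in memory
                .build();
            long id = dfs.addCacheDirective(info, EnumSet.noneOf(CacheFlag.class));
            System.out.println("created directive " + id);
        }
    }
}
```

The returned ID is what modifyCacheDirective and removeCacheDirective use to address the directive later.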
-
modifyCacheDirective
- Throws:
IOException- See Also:
-
modifyCacheDirective
public void modifyCacheDirective(CacheDirectiveInfo info, EnumSet<CacheFlag> flags) throws IOException Modify a CacheDirective.- Parameters:
info - Information about the directive to modify. You must set the ID to indicate which CacheDirective you want to modify. flags - CacheFlags to use for this operation.- Throws:
IOException- if the directive could not be modified
-
removeCacheDirective
Remove a CacheDirectiveInfo.- Parameters:
id- identifier of the CacheDirectiveInfo to remove- Throws:
IOException- if the directive could not be removed
-
listCacheDirectives
public org.apache.hadoop.fs.RemoteIterator<CacheDirectiveEntry> listCacheDirectives(CacheDirectiveInfo filter) throws IOException List cache directives. Incrementally fetches results from the server.- Parameters:
filter- Filter parameters to use when listing the directives, null to list all directives visible to us.- Returns:
- A RemoteIterator which returns CacheDirectiveInfo objects.
- Throws:
IOException
-
addCachePool
Add a cache pool.- Parameters:
info- The request to add a cache pool.- Throws:
IOException- If the request could not be completed.
-
modifyCachePool
Modify an existing cache pool.- Parameters:
info- The request to modify a cache pool.- Throws:
IOException- If the request could not be completed.
-
removeCachePool
Remove a cache pool.- Parameters:
poolName- Name of the cache pool to remove.- Throws:
IOException- if the cache pool did not exist, or could not be removed.
-
listCachePools
List all cache pools.- Returns:
- A remote iterator from which you can get CachePoolEntry objects. Requests will be made as needed.
- Throws:
IOException- If there was an error listing cache pools.
-
modifyAclEntries
public void modifyAclEntries(org.apache.hadoop.fs.Path path, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException - Overrides:
modifyAclEntriesin classorg.apache.hadoop.fs.FileSystem- Throws:
IOException
-
removeAclEntries
public void removeAclEntries(org.apache.hadoop.fs.Path path, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException - Overrides:
removeAclEntriesin classorg.apache.hadoop.fs.FileSystem- Throws:
IOException
-
removeDefaultAcl
- Overrides:
removeDefaultAclin classorg.apache.hadoop.fs.FileSystem- Throws:
IOException
-
removeAcl
- Overrides:
removeAclin classorg.apache.hadoop.fs.FileSystem- Throws:
IOException
-
setAcl
public void setAcl(org.apache.hadoop.fs.Path path, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException - Overrides:
setAclin classorg.apache.hadoop.fs.FileSystem- Throws:
IOException
-
getAclStatus
public org.apache.hadoop.fs.permission.AclStatus getAclStatus(org.apache.hadoop.fs.Path path) throws IOException - Overrides:
getAclStatusin classorg.apache.hadoop.fs.FileSystem- Throws:
IOException
-
createEncryptionZone
- Throws:
IOException
-
getEZForPath
- Throws:
IOException
-
listEncryptionZones
- Throws:
IOException
-
reencryptEncryptionZone
public void reencryptEncryptionZone(org.apache.hadoop.fs.Path zone, HdfsConstants.ReencryptAction action) throws IOException - Throws:
IOException
-
listReencryptionStatus
public org.apache.hadoop.fs.RemoteIterator<ZoneReencryptionStatus> listReencryptionStatus() throws IOException- Throws:
IOException
-
getFileEncryptionInfo
public org.apache.hadoop.fs.FileEncryptionInfo getFileEncryptionInfo(org.apache.hadoop.fs.Path path) throws IOException - Throws:
IOException
-
provisionEZTrash
public void provisionEZTrash(org.apache.hadoop.fs.Path path, org.apache.hadoop.fs.permission.FsPermission trashPermission) throws IOException - Throws:
IOException
-
provisionSnapshotTrash
public org.apache.hadoop.fs.Path provisionSnapshotTrash(org.apache.hadoop.fs.Path path, org.apache.hadoop.fs.permission.FsPermission trashPermission) throws IOException HDFS only. Provision snapshottable directory trash.- Parameters:
path- Path to a snapshottable directory.trashPermission- Expected FsPermission of the trash root.- Returns:
- Path of the provisioned trash root
- Throws:
IOException
-
setXAttr
public void setXAttr(org.apache.hadoop.fs.Path path, String name, byte[] value, EnumSet<org.apache.hadoop.fs.XAttrSetFlag> flag) throws IOException - Overrides:
setXAttrin classorg.apache.hadoop.fs.FileSystem- Throws:
IOException
-
getXAttr
- Overrides:
getXAttrin classorg.apache.hadoop.fs.FileSystem- Throws:
IOException
-
getXAttrs
- Overrides:
getXAttrsin classorg.apache.hadoop.fs.FileSystem- Throws:
IOException
-
getXAttrs
public Map<String,byte[]> getXAttrs(org.apache.hadoop.fs.Path path, List<String> names) throws IOException - Overrides:
getXAttrsin classorg.apache.hadoop.fs.FileSystem- Throws:
IOException
-
listXAttrs
- Overrides:
listXAttrsin classorg.apache.hadoop.fs.FileSystem- Throws:
IOException
-
removeXAttr
- Overrides:
removeXAttrin classorg.apache.hadoop.fs.FileSystem- Throws:
IOException
-
access
public void access(org.apache.hadoop.fs.Path path, org.apache.hadoop.fs.permission.FsAction mode) throws IOException - Overrides:
accessin classorg.apache.hadoop.fs.FileSystem- Throws:
IOException
-
getKeyProviderUri
- Specified by:
getKeyProviderUriin interfaceorg.apache.hadoop.crypto.key.KeyProviderTokenIssuer- Throws:
IOException
-
getKeyProvider
- Specified by:
getKeyProviderin interfaceorg.apache.hadoop.crypto.key.KeyProviderTokenIssuer- Throws:
IOException
-
getAdditionalTokenIssuers
public org.apache.hadoop.security.token.DelegationTokenIssuer[] getAdditionalTokenIssuers() throws IOException- Specified by:
getAdditionalTokenIssuersin interfaceorg.apache.hadoop.security.token.DelegationTokenIssuer- Overrides:
getAdditionalTokenIssuersin classorg.apache.hadoop.fs.FileSystem- Throws:
IOException
-
getInotifyEventStream
- Throws:
IOException
-
getInotifyEventStream
- Throws:
IOException
-
setErasureCodingPolicy
public void setErasureCodingPolicy(org.apache.hadoop.fs.Path path, String ecPolicyName) throws IOException Set the source path to the specified erasure coding policy.- Specified by:
setErasureCodingPolicyin interfaceorg.apache.hadoop.fs.WithErasureCoding- Parameters:
path - The directory on which to set the policy. ecPolicyName - The erasure coding policy name.- Throws:
IOException
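A minimal sketch of setting and reading back a policy ("RS-6-3-1024k" is one of the built-in policies; the directory is hypothetical and the policy must be enabled on the cluster first):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy;

public class EcPolicyExample {
    public static void main(String[] args) throws Exception {
        try (DistributedFileSystem dfs =
                 (DistributedFileSystem) FileSystem.get(new Configuration())) {
            Path dir = new Path("/cold");
            dfs.setErasureCodingPolicy(dir, "RS-6-3-1024k");
            // Null means the path uses plain REPLICATION (see getErasureCodingPolicy).
            ErasureCodingPolicy p = dfs.getErasureCodingPolicy(dir);
            System.out.println(p == null ? "replicated" : p.getName());
        }
    }
}
```

Only files written after the policy is set are erasure coded; existing files keep their layout.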
-
satisfyStoragePolicy
Set the source path to satisfy storage policy.- Overrides:
satisfyStoragePolicyin classorg.apache.hadoop.fs.FileSystem- Parameters:
path- The source path referring to either a directory or a file.- Throws:
IOException
-
getErasureCodingPolicy
public ErasureCodingPolicy getErasureCodingPolicy(org.apache.hadoop.fs.Path path) throws IOException Get erasure coding policy information for the specified path.- Parameters:
path- The path of the file or directory- Returns:
- Returns the policy information if file or directory on the path is erasure coded, null otherwise. Null will be returned if directory or file has REPLICATION policy.
- Throws:
IOException
-
getAllErasureCodingPolicies
Retrieve all the erasure coding policies supported by this file system, including enabled, disabled and removed policies, but excluding REPLICATION policy.- Returns:
- all erasure coding policies supported by this file system.
- Throws:
IOException
-
getAllErasureCodingCodecs
Retrieve all the erasure coding codecs and coders supported by this file system.- Returns:
- all erasure coding codecs and coders supported by this file system.
- Throws:
IOException
-
addErasureCodingPolicies
public AddErasureCodingPolicyResponse[] addErasureCodingPolicies(ErasureCodingPolicy[] policies) throws IOException Add erasure coding policies to HDFS. For each input policy, schema and cellSize are required, while name and id are ignored: they are automatically created and assigned by the NameNode once the policy is successfully added, and are returned in the response. Policy states are set to DISABLED automatically.- Parameters:
policies- The user defined ec policy list to add.- Returns:
- Return the response list of adding operations.
- Throws:
IOException
-
removeErasureCodingPolicy
Remove erasure coding policy.- Parameters:
ecPolicyName- The name of the policy to be removed.- Throws:
IOException
-
enableErasureCodingPolicy
Enable erasure coding policy.- Parameters:
ecPolicyName- The name of the policy to be enabled.- Throws:
IOException
-
disableErasureCodingPolicy
Disable erasure coding policy.- Parameters:
ecPolicyName- The name of the policy to be disabled.- Throws:
IOException
-
unsetErasureCodingPolicy
Unset the erasure coding policy from the source path.- Parameters:
path- The directory to unset the policy- Throws:
IOException
-
getECTopologyResultForPolicies
public ECTopologyVerifierResult getECTopologyResultForPolicies(String... policyNames) throws IOException Verifies whether the given policies are supported in the given cluster setup. If no policy is specified, checks all enabled policies.- Parameters:
policyNames- name of policies.- Returns:
- the result if the given policies are supported in the cluster setup
- Throws:
IOException
-
getTrashRoot
public org.apache.hadoop.fs.Path getTrashRoot(org.apache.hadoop.fs.Path path) Get the root directory of Trash for a path in HDFS. 1. A file in an encryption zone returns /ez1/.Trash/username. 2. A file in a snapshottable directory returns /snapdir1/.Trash/username if dfs.namenode.snapshot.trashroot.enabled is set to true. 3. In other cases, or if an exception is encountered when checking the encryption zone or the snapshot root of the path, returns /users/username/.Trash. The caller appends either Current or a checkpoint timestamp for the trash destination.- Overrides:
getTrashRootin classorg.apache.hadoop.fs.FileSystem- Parameters:
path- the trash root of the path to be determined.- Returns:
- trash root
-
getTrashRoots
Get all the trash roots of HDFS for the current user or for all users. 1. A file deleted from an encryption zone, e.g. ez1 rooted at /ez1, has its trash root at /ez1/.Trash/$USER. 2. A file deleted from a snapshottable directory, if dfs.namenode.snapshot.trashroot.enabled is set to true, e.g. snapshottable directory /snapdir1, has its trash root at /snapdir1/.Trash/$USER. 3. A file deleted from any other directory has its trash root at /user/username/.Trash.- Overrides:
getTrashRootsin classorg.apache.hadoop.fs.FileSystem- Parameters:
allUsers- return trashRoots of all users if true, used by emptier- Returns:
- trash roots of HDFS
-
fixRelativePart
protected org.apache.hadoop.fs.Path fixRelativePart(org.apache.hadoop.fs.Path p) - Overrides:
fixRelativePartin classorg.apache.hadoop.fs.FileSystem
-
createFile
Create a HdfsDataOutputStreamBuilder to create a file on DFS. Similar to FileSystem.create(Path), the file is overwritten by default.- Overrides:
createFilein classorg.apache.hadoop.fs.FileSystem- Parameters:
path- the path of the file to create.- Returns:
- A HdfsDataOutputStreamBuilder for creating a file.
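A hedged sketch of the builder in use (the path is hypothetical, and replication(2) assumes at least two live datanodes):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class CreateFileExample {
    public static void main(String[] args) throws Exception {
        try (DistributedFileSystem dfs =
                 (DistributedFileSystem) FileSystem.get(new Configuration())) {
            // createFile returns a builder; build() opens the stream.
            try (FSDataOutputStream out = dfs.createFile(new Path("/tmp/demo.txt"))
                     .overwrite(true)          // the default, stated explicitly
                     .replication((short) 2)
                     .build()) {
                out.writeBytes("hello, hdfs\n");
            }
        }
    }
}
```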
-
listOpenFiles
@Deprecated public org.apache.hadoop.fs.RemoteIterator<OpenFileEntry> listOpenFiles() throws IOException Deprecated. Returns a RemoteIterator which can be used to list all open files currently managed by the NameNode. For large numbers of open files, the iterator fetches the list in batches of a configured size. Since the list is fetched in batches, it does not represent a consistent snapshot of all open files.
This method can only be called by HDFS superusers.
- Throws:
IOException
-
listOpenFiles
@Deprecated public org.apache.hadoop.fs.RemoteIterator<OpenFileEntry> listOpenFiles(EnumSet<OpenFilesIterator.OpenFilesType> openFilesTypes) throws IOException Deprecated.- Throws:
IOException
-
listOpenFiles
public org.apache.hadoop.fs.RemoteIterator<OpenFileEntry> listOpenFiles(EnumSet<OpenFilesIterator.OpenFilesType> openFilesTypes, String path) throws IOException - Throws:
IOException
-
appendFile
Create a DistributedFileSystem.HdfsDataOutputStreamBuilder to append a file on DFS.- Overrides:
appendFilein classorg.apache.hadoop.fs.FileSystem- Parameters:
path- file path.- Returns:
- A
DistributedFileSystem.HdfsDataOutputStreamBuilder for appending a file.
-
hasPathCapability
public boolean hasPathCapability(org.apache.hadoop.fs.Path path, String capability) throws IOException HDFS client capabilities. Uses DfsPathCapabilities to keep WebHdfsFileSystem in sync.- Specified by:
hasPathCapabilityin interfaceorg.apache.hadoop.fs.PathCapabilities- Overrides:
hasPathCapabilityin classorg.apache.hadoop.fs.FileSystem- Throws:
IOException
-
createMultipartUploader
public org.apache.hadoop.fs.MultipartUploaderBuilder createMultipartUploader(org.apache.hadoop.fs.Path basePath) throws IOException - Overrides:
createMultipartUploaderin classorg.apache.hadoop.fs.FileSystem- Throws:
IOException
-
getSlowDatanodeStats
Retrieve stats for slow running datanodes.- Returns:
- An array of slow datanode info.
- Throws:
IOException- If an I/O error occurs.
-
getLocatedBlocks
public LocatedBlocks getLocatedBlocks(org.apache.hadoop.fs.Path p, long start, long len) throws IOException Returns the LocatedBlocks of the corresponding HDFS file p, from offset start for length len. This is similar to getFileBlockLocations(Path, long, long) except that it returns LocatedBlocks rather than a BlockLocation array.- Parameters:
p - path representing the file of interest. start - offset. len - length.- Returns:
- a LocatedBlocks object
- Throws:
IOException
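The block-level view above can be sketched like this (illustrative only; the file path is hypothetical and the call requires a live cluster):

```java
import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;
import org.apache.hadoop.hdfs.protocol.LocatedBlocks;

public class LocatedBlocksExample {
    public static void main(String[] args) throws Exception {
        try (DistributedFileSystem dfs =
                 (DistributedFileSystem) FileSystem.get(new Configuration())) {
            // Ask for every block of the file: offset 0, maximal length.
            LocatedBlocks blocks =
                dfs.getLocatedBlocks(new Path("/data/file.bin"), 0, Long.MAX_VALUE);
            for (LocatedBlock b : blocks.getLocatedBlocks()) {
                // Each LocatedBlock pairs a block ID with its datanode locations.
                System.out.println(b.getBlock() + " @ "
                        + Arrays.toString(b.getLocations()));
            }
        }
    }
}
```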
-
getEnclosingRoot
public org.apache.hadoop.fs.Path getEnclosingRoot(org.apache.hadoop.fs.Path path) throws IOException Return the path of the enclosing root for a given path. The enclosing root path is a common ancestor that should be used for temp and staging dirs as well as within encryption zones and other restricted directories.- Overrides:
getEnclosingRootin classorg.apache.hadoop.fs.FileSystem- Parameters:
path- file path to find the enclosing root path for- Returns:
- a path to the enclosing root
- Throws:
IOException- early checks like failure to resolve path cause IO failures
-