Interface ClientProtocol
- All Known Implementing Classes:
ClientNamenodeProtocolTranslatorPB
-
Field Summary
- static final int GET_STATS_CAPACITY_IDX - Constants to index the array of aggregated stats returned by getStats().
- static final int GET_STATS_USED_IDX
- static final int GET_STATS_REMAINING_IDX
- static final int GET_STATS_UNDER_REPLICATED_IDX - Deprecated. Use GET_STATS_LOW_REDUNDANCY_IDX instead.
- static final int GET_STATS_LOW_REDUNDANCY_IDX
- static final int GET_STATS_CORRUPT_BLOCKS_IDX
- static final int GET_STATS_MISSING_BLOCKS_IDX
- static final int GET_STATS_MISSING_REPL_ONE_BLOCKS_IDX
- static final int GET_STATS_BYTES_IN_FUTURE_BLOCKS_IDX
- static final int GET_STATS_PENDING_DELETION_BLOCKS_IDX
- static final int STATS_ARRAY_LENGTH
- static final long versionID - Until version 69, this class ClientProtocol served as both the client interface to the NN AND the RPC protocol used to communicate with the NN.
-
Method Summary
- void abandonBlock(ExtendedBlock b, long fileId, String src, String holder) - The client can give up on a block by calling abandonBlock().
- LocatedBlock addBlock(String src, String clientName, ExtendedBlock previous, DatanodeInfo[] excludeNodes, long fileId, String[] favoredNodes, EnumSet<AddBlockFlag> addBlockFlags) - A client that wants to write an additional block to the indicated filename (which must currently be open for writing) should call addBlock().
- long addCacheDirective(CacheDirectiveInfo directive, EnumSet<CacheFlag> flags) - Add a CacheDirective to the CacheManager.
- void addCachePool(CachePoolInfo info) - Add a new cache pool.
- AddErasureCodingPolicyResponse[] addErasureCodingPolicies(ErasureCodingPolicy[] policies) - Add Erasure coding policies to HDFS.
- void allowSnapshot(String snapshotRoot) - Allow snapshot on a directory.
- LastBlockWithStatus append(String src, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag) - Append to the end of the file.
- void cancelDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token) - Cancel an existing delegation token.
- void checkAccess(String path, org.apache.hadoop.fs.permission.FsAction mode) - Checks if the user can access a path.
- boolean complete(String src, String clientName, ExtendedBlock last, long fileId) - The client is done writing data to the given filename, and would like to complete it.
- void concat(String trg, String[] srcs) - Moves blocks from srcs to trg and deletes srcs.
- HdfsFileStatus create(String src, org.apache.hadoop.fs.permission.FsPermission masked, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize, org.apache.hadoop.crypto.CryptoProtocolVersion[] supportedVersions, String ecPolicyName, String storagePolicy) - Create a new file entry in the namespace.
- void createEncryptionZone(String src, String keyName) - Create an encryption zone.
- String createSnapshot(String snapshotRoot, String snapshotName) - Create a snapshot.
- void createSymlink(String target, String link, org.apache.hadoop.fs.permission.FsPermission dirPerm, boolean createParent) - Create symlink to a file or directory.
- boolean delete(String src, boolean recursive) - Delete the given file or directory from the file system.
- void deleteSnapshot(String snapshotRoot, String snapshotName) - Delete a specific snapshot of a snapshottable directory.
- void disableErasureCodingPolicy(String ecPolicyName) - Disable erasure coding policy.
- void disallowSnapshot(String snapshotRoot) - Disallow snapshot on a directory.
- void enableErasureCodingPolicy(String ecPolicyName) - Enable erasure coding policy.
- void finalizeUpgrade() - Finalize previous upgrade.
- void fsync(String src, long inodeId, String client, long lastBlockLength) - Write all metadata for this file into persistent storage.
- org.apache.hadoop.fs.permission.AclStatus getAclStatus(String src) - Gets the ACLs of files and directories.
- LocatedBlock getAdditionalDatanode(String src, long fileId, ExtendedBlock blk, DatanodeInfo[] existings, String[] existingStorageIDs, DatanodeInfo[] excludes, int numAdditionalNodes, String clientName) - Get a datanode for an existing pipeline.
- BatchedDirectoryListing getBatchedListing(String[] srcs, byte[] startAfter, boolean needLocation) - Get a partial listing of the input directories.
- LocatedBlocks getBlockLocations(String src, long offset, long length) - Get locations of the blocks of the specified file within the specified range.
- org.apache.hadoop.fs.ContentSummary getContentSummary(String path) - Get ContentSummary rooted at the specified directory.
- long getCurrentEditLogTxid() - Get the highest txid the NameNode knows has been written to the edit log, or -1 if the NameNode's edit log is not yet open for write.
- DataEncryptionKey getDataEncryptionKey() - Get an encryption key so a client can encrypt data sent via the DataTransferProtocol to/from DataNodes.
- DatanodeInfo[] getDatanodeReport(HdfsConstants.DatanodeReportType type) - Get a report on the system's current datanodes.
- DatanodeStorageReport[] getDatanodeStorageReport(HdfsConstants.DatanodeReportType type) - Get a report on the current datanode storages.
- org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> getDelegationToken(org.apache.hadoop.io.Text renewer) - Get a valid Delegation Token.
- ECBlockGroupStats getECBlockGroupStats() - Get statistics pertaining to blocks of type BlockType.STRIPED in the filesystem.
- ECTopologyVerifierResult getECTopologyResultForPolicies(String... policyNames) - Verifies if the given policies are supported in the given cluster setup.
- EventBatchList getEditsFromTxid(long txid) - Get an ordered list of batches of events corresponding to the edit log transactions for txids equal to or greater than txid.
- org.apache.hadoop.fs.Path getEnclosingRoot(String src) - Get the enclosing root for a path.
- Map<String,String> getErasureCodingCodecs() - Get the erasure coding codecs loaded in the Namenode.
- ErasureCodingPolicyInfo[] getErasureCodingPolicies() - Get the erasure coding policies loaded in the Namenode, excluding the REPLICATION policy.
- ErasureCodingPolicy getErasureCodingPolicy(String src) - Get the information about the EC policy for the path.
- EncryptionZone getEZForPath(String src) - Get the encryption zone for a path.
- HdfsFileStatus getFileInfo(String src) - Get the file info for a specific file or directory.
- HdfsFileStatus getFileLinkInfo(String src) - Get the file info for a specific file or directory; if the path refers to a symlink, the status of the symlink is returned.
- org.apache.hadoop.ha.HAServiceProtocol.HAServiceState getHAServiceState() - Get HA service state of the server.
- String getLinkTarget(String path) - Return the target of the given symlink.
- DirectoryListing getListing(String src, byte[] startAfter, boolean needLocation) - Get a partial listing of the indicated directory.
- HdfsLocatedFileStatus getLocatedFileInfo(String src, boolean needBlockToken) - Get the file info for a specific file or directory with LocatedBlocks.
- long getPreferredBlockSize(String filename) - Get the block size for the given file.
- org.apache.hadoop.fs.QuotaUsage getQuotaUsage(String path) - Get QuotaUsage rooted at the specified directory.
- ReplicatedBlockStats getReplicatedBlockStats() - Get statistics pertaining to blocks of type BlockType.CONTIGUOUS in the filesystem.
- org.apache.hadoop.fs.FsServerDefaults getServerDefaults() - Get server default values for a number of configuration params.
- DatanodeInfo[] getSlowDatanodeReport() - Get a report on all of the slow Datanodes.
- SnapshotDiffReport getSnapshotDiffReport(String snapshotRoot, String fromSnapshot, String toSnapshot) - Get the difference between two snapshots, or between a snapshot and the current tree of a directory.
- SnapshotDiffReportListing getSnapshotDiffReportListing(String snapshotRoot, String fromSnapshot, String toSnapshot, byte[] startPath, int index) - Get the difference between two snapshots of a directory iteratively.
- SnapshotStatus[] getSnapshotListing(String snapshotRoot) - Get listing of all the snapshots for a snapshottable directory.
- SnapshottableDirectoryStatus[] getSnapshottableDirListing() - Get the list of snapshottable directories that are owned by the current user.
- long[] getStats() - Get an array of aggregated statistics combining blocks of both type BlockType.CONTIGUOUS and BlockType.STRIPED in the filesystem.
- BlockStoragePolicy[] getStoragePolicies() - Get all the available block storage policies.
- BlockStoragePolicy getStoragePolicy(String path) - Get the storage policy for a file/directory.
- List<XAttr> getXAttrs(String src, List<XAttr> xAttrs) - Get xattrs of a file or directory.
- boolean isFileClosed(String src) - Get the close status of a file.
- org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<CacheDirectiveEntry> listCacheDirectives(long prevId, CacheDirectiveInfo filter) - List the set of cached paths of a cache pool.
- org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<CachePoolEntry> listCachePools(String prevPool) - List the set of cache pools.
- CorruptFileBlocks listCorruptFileBlocks(String path, String cookie) - Get a partial list of corrupt files together with a cookie for resuming the listing.
- org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<EncryptionZone> listEncryptionZones(long prevId) - Used to implement cursor-based batched listing of EncryptionZones.
- org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<OpenFileEntry> listOpenFiles(long prevId) - Deprecated.
- org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<OpenFileEntry> listOpenFiles(long prevId, EnumSet<OpenFilesIterator.OpenFilesType> openFilesTypes, String path) - List open files in the system in batches.
- org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<ZoneReencryptionStatus> listReencryptionStatus(long prevId) - Used to implement cursor-based batched listing of ZoneReencryptionStatus entries.
- List<XAttr> listXAttrs(String src) - List the xattr names for a file or directory.
- void metaSave(String filename) - Dumps namenode data structures into the specified file.
- boolean mkdirs(String src, org.apache.hadoop.fs.permission.FsPermission masked, boolean createParent) - Create a directory (or hierarchy of directories) with the given name and permission.
- void modifyAclEntries(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) - Modifies ACL entries of files and directories.
- void modifyCacheDirective(CacheDirectiveInfo directive, EnumSet<CacheFlag> flags) - Modify a CacheDirective in the CacheManager.
- void modifyCachePool(CachePoolInfo req) - Modify an existing cache pool.
- void msync() - Called by client to wait until the server has reached the state id of the client.
- boolean recoverLease(String src, String clientName) - Start lease recovery.
- void reencryptEncryptionZone(String zone, HdfsConstants.ReencryptAction action) - Used to implement re-encryption of encryption zones.
- void refreshNodes() - Tells the namenode to reread the hosts and exclude files.
- void removeAcl(String src) - Removes all but the base ACL entries of files and directories.
- void removeAclEntries(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) - Removes ACL entries from files and directories.
- void removeCacheDirective(long id) - Remove a CacheDirectiveInfo from the CacheManager.
- void removeCachePool(String pool) - Remove a cache pool.
- void removeDefaultAcl(String src) - Removes all default ACL entries from files and directories.
- void removeErasureCodingPolicy(String ecPolicyName) - Remove erasure coding policy.
- void removeXAttr(String src, XAttr xAttr) - Remove xattr of a file or directory; the value in the xAttr parameter is ignored.
- boolean rename(String src, String dst) - Rename an item in the file system namespace.
- void rename2(String src, String dst, org.apache.hadoop.fs.Options.Rename... options) - Rename src to dst.
- void renameSnapshot(String snapshotRoot, String snapshotOldName, String snapshotNewName) - Rename a snapshot.
- long renewDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token) - Renew an existing delegation token.
- void renewLease(String clientName, List<String> namespaces) - Client programs can cause stateful changes in the NameNode that affect other clients.
- void reportBadBlocks(LocatedBlock[] blocks) - The client wants to report corrupted blocks (blocks with specified locations on datanodes).
- boolean restoreFailedStorage(String arg) - Enable/Disable restore failed storage.
- long rollEdits() - Roll the edit log.
- RollingUpgradeInfo rollingUpgrade(HdfsConstants.RollingUpgradeAction action) - Rolling upgrade operations.
- void satisfyStoragePolicy(String path) - Satisfy the storage policy for a file/directory.
- boolean saveNamespace(long timeWindow, long txGap) - Save namespace image.
- void setAcl(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) - Fully replaces ACL of files and directories, discarding all existing entries.
- void setBalancerBandwidth(long bandwidth) - Tell all datanodes to use a new, non-persistent bandwidth value for dfs.datanode.balance.bandwidthPerSec.
- void setErasureCodingPolicy(String src, String ecPolicyName) - Set an erasure coding policy on a specified path.
- void setOwner(String src, String username, String groupname) - Set Owner of a path (i.e. a file or a directory).
- void setPermission(String src, org.apache.hadoop.fs.permission.FsPermission permission) - Set permissions for an existing file/directory.
- void setQuota(String path, long namespaceQuota, long storagespaceQuota, org.apache.hadoop.fs.StorageType type) - Set the quota for a directory.
- boolean setReplication(String src, short replication) - Set replication for an existing file.
- boolean setSafeMode(HdfsConstants.SafeModeAction action, boolean isChecked) - Enter, leave or get safe mode.
- void setStoragePolicy(String src, String policyName) - Set the storage policy for a file/directory.
- void setTimes(String src, long mtime, long atime) - Sets the modification and access time of the file to the specified time.
- void setXAttr(String src, XAttr xAttr, EnumSet<org.apache.hadoop.fs.XAttrSetFlag> flag) - Set xattr of a file or directory.
- boolean truncate(String src, long newLength, String clientName) - Truncate file src to new size.
- void unsetErasureCodingPolicy(String src) - Unset erasure coding policy from a specified path.
- void unsetStoragePolicy(String src) - Unset the storage policy set for a given file or directory.
- LocatedBlock updateBlockForPipeline(ExtendedBlock block, String clientName) - Get a new generation stamp together with an access token for a block under construction.
- void updatePipeline(String clientName, ExtendedBlock oldBlock, ExtendedBlock newBlock, DatanodeID[] newNodes, String[] newStorageIDs) - Update a pipeline for a block under construction.
- boolean upgradeStatus() - Get status of upgrade - finalized or not.
-
Field Details
-
versionID
static final long versionID
Until version 69, this class ClientProtocol served as both the client interface to the NN AND the RPC protocol used to communicate with the NN. This class is used by both the DFSClient and the NN server side to insulate from the protocol serialization. If you are adding/changing this interface then you need to change both this class and ALSO the related protocol buffer wire protocol definition in ClientNamenodeProtocol.proto. For more details on the protocol buffer wire protocol, please see .../org/apache/hadoop/hdfs/protocolPB/overview.html. The log of historical changes can be retrieved from the svn. 69: Eliminate overloaded method names. 69L is the last version id when this class was used for protocol serialization. Do not update this version any further.
-
GET_STATS_CAPACITY_IDX
static final int GET_STATS_CAPACITY_IDX
Constants to index the array of aggregated stats returned by getStats().
-
GET_STATS_USED_IDX
static final int GET_STATS_USED_IDX
-
GET_STATS_REMAINING_IDX
static final int GET_STATS_REMAINING_IDX
-
GET_STATS_UNDER_REPLICATED_IDX
static final int GET_STATS_UNDER_REPLICATED_IDX
Deprecated. Use GET_STATS_LOW_REDUNDANCY_IDX instead.
-
GET_STATS_LOW_REDUNDANCY_IDX
static final int GET_STATS_LOW_REDUNDANCY_IDX
-
GET_STATS_CORRUPT_BLOCKS_IDX
static final int GET_STATS_CORRUPT_BLOCKS_IDX
-
GET_STATS_MISSING_BLOCKS_IDX
static final int GET_STATS_MISSING_BLOCKS_IDX
-
GET_STATS_MISSING_REPL_ONE_BLOCKS_IDX
static final int GET_STATS_MISSING_REPL_ONE_BLOCKS_IDX
-
GET_STATS_BYTES_IN_FUTURE_BLOCKS_IDX
static final int GET_STATS_BYTES_IN_FUTURE_BLOCKS_IDX
-
GET_STATS_PENDING_DELETION_BLOCKS_IDX
static final int GET_STATS_PENDING_DELETION_BLOCKS_IDX
-
STATS_ARRAY_LENGTH
static final int STATS_ARRAY_LENGTH
-
-
Method Details
-
getBlockLocations
Get locations of the blocks of the specified file within the specified range. DataNode locations for each block are sorted by their proximity to the client. Returns LocatedBlocks, which contains the file length, the blocks, and their locations. The client will then have to contact one of the indicated DataNodes to obtain the actual data.
- Parameters:
- src - file name
- offset - range start offset
- length - range length
- Returns:
- file length and array of blocks with their locations
- Throws:
- org.apache.hadoop.security.AccessControlException - If access is denied
- FileNotFoundException - If file src does not exist
- org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
- IOException - If an I/O error occurred
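For illustration, a minimal sketch of reading block locations through a ClientProtocol proxy. The namenode reference and how it is obtained are assumptions here; applications normally go through DistributedFileSystem/DFSClient rather than calling this RPC interface directly.

import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;
import org.apache.hadoop.hdfs.protocol.LocatedBlocks;

class BlockLocationsExample {
  static void printLocations(ClientProtocol namenode, String src) throws IOException {
    // Ask for every block in the file: offset 0, length = whole file.
    LocatedBlocks blocks = namenode.getBlockLocations(src, 0, Long.MAX_VALUE);
    System.out.println("file length: " + blocks.getFileLength());
    for (LocatedBlock b : blocks.getLocatedBlocks()) {
      // Locations are sorted by proximity to the client; the first
      // entry is the preferred DataNode to read from.
      for (DatanodeInfo dn : b.getLocations()) {
        System.out.println(b.getBlock() + " @ " + dn.getXferAddr());
      }
    }
  }
}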
-
getServerDefaults
Get server default values for a number of configuration params.
- Returns:
- a set of server default configuration values
- Throws:
IOException
-
create
HdfsFileStatus create(String src, org.apache.hadoop.fs.permission.FsPermission masked, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize, org.apache.hadoop.crypto.CryptoProtocolVersion[] supportedVersions, String ecPolicyName, String storagePolicy) throws IOException
Create a new file entry in the namespace. This will create an empty file specified by the source path. The path should reflect a full path originated at the root. The name-node does not have a notion of "current" directory for a client.
Once created, the file is visible and available for read to other clients. However, other clients cannot delete(String, boolean), re-create, or rename(String, String) it until the file is completed or abandoned as a result of lease expiration.
Blocks have a maximum size. Clients that intend to create multi-block files must also use addBlock(java.lang.String, java.lang.String, org.apache.hadoop.hdfs.protocol.ExtendedBlock, org.apache.hadoop.hdfs.protocol.DatanodeInfo[], long, java.lang.String[], java.util.EnumSet<org.apache.hadoop.hdfs.AddBlockFlag>).
- Parameters:
- src - path of the file being created.
- masked - masked permission.
- clientName - name of the current client.
- flag - indicates whether the file should be overwritten if it already exists, created if it does not exist, or appended to; and whether the file should be a replicated file regardless of its ancestor's replication or erasure coding policy.
- createParent - create missing parent directory if true
- replication - block replication factor.
- blockSize - maximum block size.
- supportedVersions - CryptoProtocolVersions supported by the client
- ecPolicyName - the name of the erasure coding policy. A null value means this file will inherit its parent directory's policy, either traditional replication or an erasure coding policy. ecPolicyName and the SHOULD_REPLICATE CreateFlag are mutually exclusive; it is invalid to set both the SHOULD_REPLICATE flag and a non-null ecPolicyName.
- storagePolicy - the name of the storage policy.
- Returns:
- the status of the created file; it could be null if the server doesn't support returning the file status
- Throws:
- org.apache.hadoop.security.AccessControlException - If access is denied
- AlreadyBeingCreatedException - if the path does not exist.
- DSQuotaExceededException - If file creation violates disk space quota restriction
- org.apache.hadoop.fs.FileAlreadyExistsException - If file src already exists
- FileNotFoundException - If parent of src does not exist and createParent is false
- org.apache.hadoop.fs.ParentNotDirectoryException - If parent of src is not a directory.
- NSQuotaExceededException - If file creation violates name space quota restriction
- SafeModeException - create not allowed in safemode
- org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
- SnapshotAccessControlException - if path is in RO snapshot
- IOException - If an I/O error occurred
RuntimeExceptions:
- org.apache.hadoop.fs.InvalidPathException - Path src is invalid
Note that create with CreateFlag.OVERWRITE is idempotent.
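A sketch of the create/addBlock/complete lifecycle this entry describes. The proxy, client name, block size, and replication factor are illustrative assumptions, and streaming the actual bytes to the DataNode pipeline is omitted; DFSClient performs all of this internally.

import java.io.IOException;
import java.util.EnumSet;
import org.apache.hadoop.crypto.CryptoProtocolVersion;
import org.apache.hadoop.fs.CreateFlag;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.io.EnumSetWritable;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;
import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;

class CreateFileExample {
  static void createOneBlockFile(ClientProtocol namenode, String src,
      String clientName) throws IOException {
    HdfsFileStatus stat = namenode.create(src,
        FsPermission.getFileDefault(), clientName,
        new EnumSetWritable<>(EnumSet.of(CreateFlag.CREATE)),
        true,                             // createParent
        (short) 3,                        // replication (assumed)
        128L * 1024 * 1024,               // blockSize (assumed)
        CryptoProtocolVersion.supported(),
        null,                             // ecPolicyName: inherit parent's
        null);                            // storagePolicy: default
    long fileId = stat.getFileId();
    // Allocate the first block; `previous` is null on the first call.
    LocatedBlock blk = namenode.addBlock(src, clientName, null, null,
        fileId, null, null);
    // ... stream data to blk.getLocations() via the data transfer
    // protocol, then commit the last block and close the file:
    namenode.complete(src, clientName, blk.getBlock(), fileId);
  }
}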
-
append
LastBlockWithStatus append(String src, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag) throws IOException
Append to the end of the file.
- Parameters:
- src - path of the file being appended.
- clientName - name of the current client.
- flag - indicates whether the data is appended to a new block.
- Returns:
- wrapper with information about the last partial block and file status, if any
- Throws:
- org.apache.hadoop.security.AccessControlException - if permission to append to the file is denied by the system. On the client side the exception will usually be wrapped into a RemoteException. Appending to an existing file is allowed if the server is configured with the parameter dfs.support.append set to true; otherwise an IOException is thrown.
- FileNotFoundException - If file src is not found
- DSQuotaExceededException - If append violates disk space quota restriction
- SafeModeException - append not allowed in safemode
- org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
- SnapshotAccessControlException - if path is in RO snapshot
- IOException - If an I/O error occurred.
RuntimeExceptions:
- UnsupportedOperationException - if append is not supported
-
setReplication
Set replication for an existing file.
The NameNode sets replication to the new value and returns. The actual block replication is not expected to be performed during this method call. The blocks will be populated or removed in the background as the result of the routine block maintenance procedures.
- Parameters:
- src - file name
- replication - new replication
- Returns:
- true if successful; false if file does not exist or is a directory
- Throws:
- org.apache.hadoop.security.AccessControlException - If access is denied
- DSQuotaExceededException - If replication violates disk space quota restriction
- FileNotFoundException - If file src is not found
- SafeModeException - not allowed in safemode
- org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink
- SnapshotAccessControlException - if path is in RO snapshot
- IOException - If an I/O error occurred
-
getStoragePolicies
Get all the available block storage policies.
- Returns:
- all the block storage policies currently in use.
- Throws:
IOException
-
setStoragePolicy
Set the storage policy for a file/directory.
- Parameters:
- src - Path of an existing file/directory.
- policyName - The name of the storage policy
- Throws:
- SnapshotAccessControlException - If access is denied
- org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink
- FileNotFoundException - If file/dir src is not found
- QuotaExceededException - If changes violate the quota restriction
- IOException
-
unsetStoragePolicy
Unset the storage policy set for a given file or directory.
- Parameters:
- src - Path of an existing file/directory.
- Throws:
- SnapshotAccessControlException - If access is denied
- org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink
- FileNotFoundException - If file/dir src is not found
- QuotaExceededException - If changes violate the quota restriction
- IOException
-
getStoragePolicy
Get the storage policy for a file/directory.
- Parameters:
- path - Path of an existing file/directory.
- Throws:
- org.apache.hadoop.security.AccessControlException - If access is denied
- org.apache.hadoop.fs.UnresolvedLinkException - if path contains a symlink
- FileNotFoundException - If file/dir path is not found
- IOException
-
setPermission
void setPermission(String src, org.apache.hadoop.fs.permission.FsPermission permission) throws IOException
Set permissions for an existing file/directory.
- Throws:
- org.apache.hadoop.security.AccessControlException - If access is denied
- FileNotFoundException - If file src is not found
- SafeModeException - not allowed in safemode
- org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
- SnapshotAccessControlException - if path is in RO snapshot
- IOException - If an I/O error occurred
-
setOwner
Set Owner of a path (i.e. a file or a directory). The parameters username and groupname cannot both be null.
- Parameters:
- src - file path
- username - If it is null, the original username remains unchanged.
- groupname - If it is null, the original groupname remains unchanged.
- Throws:
- org.apache.hadoop.security.AccessControlException - If access is denied
- FileNotFoundException - If file src is not found
- SafeModeException - not allowed in safemode
- org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
- SnapshotAccessControlException - if path is in RO snapshot
- IOException - If an I/O error occurred
-
abandonBlock
The client can give up on a block by calling abandonBlock(). The client can then either obtain a new block, or complete or abandon the file. Any partial writes to the block will be discarded.
- Parameters:
- b - Block to abandon
- fileId - The id of the file where the block resides. Older clients will pass GRANDFATHER_INODE_ID here.
- src - The path of the file where the block resides.
- holder - Lease holder.
- Throws:
- org.apache.hadoop.security.AccessControlException - If access is denied
- FileNotFoundException - file src is not found
- org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
- IOException - If an I/O error occurred
-
addBlock
LocatedBlock addBlock(String src, String clientName, ExtendedBlock previous, DatanodeInfo[] excludeNodes, long fileId, String[] favoredNodes, EnumSet<AddBlockFlag> addBlockFlags) throws IOException
A client that wants to write an additional block to the indicated filename (which must currently be open for writing) should call addBlock(). addBlock() allocates a new block, along with the datanodes the block data should be replicated to. addBlock() also commits the previous block by reporting to the name-node the actual generation stamp and the length of the block that the client has transmitted to data-nodes.
- Parameters:
- src - the file being created
- clientName - the name of the client that adds the block
- previous - previous block
- excludeNodes - a list of nodes that should not be allocated for the current block
- fileId - the id uniquely identifying a file
- favoredNodes - the list of nodes where the client wants the blocks. Nodes are identified by either host name or address.
- addBlockFlags - flags to advise the behavior of allocating and placing a new block.
- Returns:
- LocatedBlock allocated block information.
- Throws:
- org.apache.hadoop.security.AccessControlException - If access is denied
- FileNotFoundException - If file src is not found
- NotReplicatedYetException - previous blocks of the file are not replicated yet. Blocks cannot be added until replication completes.
- SafeModeException - create not allowed in safemode
- org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
- IOException - If an I/O error occurred
-
getAdditionalDatanode
LocatedBlock getAdditionalDatanode(String src, long fileId, ExtendedBlock blk, DatanodeInfo[] existings, String[] existingStorageIDs, DatanodeInfo[] excludes, int numAdditionalNodes, String clientName) throws IOException
Get a datanode for an existing pipeline.
- Parameters:
- src - the file being written
- fileId - the ID of the file being written
- blk - the block being written
- existings - the existing nodes in the pipeline
- excludes - the excluded nodes
- numAdditionalNodes - number of additional datanodes
- clientName - the name of the client
- Returns:
- the located block.
- Throws:
- org.apache.hadoop.security.AccessControlException - If access is denied
- FileNotFoundException - If file src is not found
- SafeModeException - create not allowed in safemode
- org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
- IOException - If an I/O error occurred
-
complete
The client is done writing data to the given filename, and would like to complete it. The function returns whether the file has been closed successfully. If the function returns false, the caller should try again. complete() also commits the last block of the file by reporting to the name-node the actual generation stamp and the length of the block that the client has transmitted to data-nodes. A call to complete() will not return true until all the file's blocks have been replicated the minimum number of times. Thus, DataNode failures may cause a client to call complete() several times before succeeding.
- Parameters:
- src - the file being created
- clientName - the name of the client that adds the block
- last - the last block info
- fileId - the id uniquely identifying a file
- Returns:
- true if all file blocks are minimally replicated, false otherwise
- Throws:
- org.apache.hadoop.security.AccessControlException - If access is denied
- FileNotFoundException - If file src is not found
- SafeModeException - create not allowed in safemode
- org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
- IOException - If an I/O error occurred
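Because complete() may return false while the last blocks are still reaching minimal replication, callers retry. A minimal sketch with a hypothetical retry bound and backoff; DFSClient implements the production version of this loop.

import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;
import org.apache.hadoop.hdfs.protocol.ExtendedBlock;

class CompleteRetryExample {
  static void completeWithRetry(ClientProtocol namenode, String src,
      String clientName, ExtendedBlock last, long fileId)
      throws IOException, InterruptedException {
    int attempts = 0;
    // false means the minimal-replication condition is not met yet.
    while (!namenode.complete(src, clientName, last, fileId)) {
      if (++attempts > 10) {              // assumed bound
        throw new IOException("Unable to close file " + src);
      }
      Thread.sleep(400L * attempts);      // assumed backoff
    }
  }
}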
-
reportBadBlocks
The client wants to report corrupted blocks (blocks with specified locations on datanodes).
- Parameters:
- blocks - Array of located blocks to report
- Throws:
IOException
-
rename
Rename an item in the file system namespace.
- Parameters:
- src - existing file or directory name.
- dst - new name.
- Returns:
- true if successful, or false if the old name does not exist or if the new name already belongs to the namespace.
- Throws:
- SnapshotAccessControlException - if path is in RO snapshot
- IOException - an I/O error occurred
-
concat
Moves blocks from srcs to trg and deletes srcs.
- Parameters:
- trg - existing file
- srcs - list of existing files (same block size, same replication)
- Throws:
- IOException - if some arguments are invalid
- org.apache.hadoop.fs.UnresolvedLinkException - if trg or srcs contains a symlink
- SnapshotAccessControlException - if path is in RO snapshot
-
rename2
void rename2(String src, String dst, org.apache.hadoop.fs.Options.Rename... options) throws IOException
Rename src to dst.
- Fails if src is a file and dst is a directory.
- Fails if src is a directory and dst is a file.
- Fails if the parent of dst does not exist or is a file.
Without OVERWRITE option, rename fails if the dst already exists. With OVERWRITE option, rename overwrites the dst, if it is a file or an empty directory. Rename fails if dst is a non-empty directory.
This implementation of rename is atomic.
- Parameters:
- src - existing file or directory name.
- dst - new name.
- options - Rename options
- Throws:
- org.apache.hadoop.security.AccessControlException - If access is denied
- DSQuotaExceededException - If rename violates disk space quota restriction
- org.apache.hadoop.fs.FileAlreadyExistsException - If dst already exists and options has the Options.Rename.OVERWRITE option set to false.
- FileNotFoundException - If src does not exist
- NSQuotaExceededException - If rename violates namespace quota restriction
- org.apache.hadoop.fs.ParentNotDirectoryException - If parent of dst is not a directory
- SafeModeException - rename not allowed in safemode
- org.apache.hadoop.fs.UnresolvedLinkException - If src or dst contains a symlink
- SnapshotAccessControlException - if path is in RO snapshot
- IOException - If an I/O error occurred
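A short sketch of the OVERWRITE semantics described above (proxy assumed):

import java.io.IOException;
import org.apache.hadoop.fs.Options;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;

class RenameExample {
  static void move(ClientProtocol namenode, String src, String dst,
      boolean replaceExisting) throws IOException {
    if (replaceExisting) {
      // Atomically replaces dst if it is a file or an empty directory;
      // still fails if dst is a non-empty directory.
      namenode.rename2(src, dst, Options.Rename.OVERWRITE);
    } else {
      namenode.rename2(src, dst); // fails if dst already exists
    }
  }
}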
-
truncate
Truncate file src to new size.
- Fails if src is a directory.
- Fails if src does not exist.
- Fails if src is not closed.
- Fails if new size is greater than current size.
This implementation of truncate is purely a namespace operation if truncate occurs at a block boundary. Requires DataNode block recovery otherwise.
- Parameters:
- src - existing file
- newLength - the target size
- Returns:
- true if client does not need to wait for block recovery, false if client needs to wait for block recovery.
- Throws:
- org.apache.hadoop.security.AccessControlException - If access is denied
- FileNotFoundException - If file src is not found
- SafeModeException - truncate not allowed in safemode
- org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
- SnapshotAccessControlException - if path is in RO snapshot
- IOException - If an I/O error occurred
-
delete
Delete the given file or directory from the file system. Same as delete but provides a way to avoid accidentally deleting non-empty directories programmatically.
- Parameters:
- src - existing name
- recursive - if true deletes a non-empty directory recursively, else throws an exception.
- Returns:
- true only if the existing file or directory was actually removed from the file system.
- Throws:
- org.apache.hadoop.security.AccessControlException - If access is denied
- FileNotFoundException - If file src is not found
- SafeModeException - create not allowed in safemode
- org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
- SnapshotAccessControlException - if path is in RO snapshot
- org.apache.hadoop.fs.PathIsNotEmptyDirectoryException - if path is a non-empty directory and recursive is set to false
- IOException - If an I/O error occurred
-
mkdirs
boolean mkdirs(String src, org.apache.hadoop.fs.permission.FsPermission masked, boolean createParent) throws IOException
Create a directory (or hierarchy of directories) with the given name and permission.
- Parameters:
- src - The path of the directory being created
- masked - The masked permission of the directory being created
- createParent - create missing parent directory if true
- Returns:
- true if the operation succeeds.
- Throws:
- org.apache.hadoop.security.AccessControlException - If access is denied
- org.apache.hadoop.fs.FileAlreadyExistsException - If src already exists
- FileNotFoundException - If parent of src does not exist and createParent is false
- NSQuotaExceededException - If file creation violates quota restriction
- org.apache.hadoop.fs.ParentNotDirectoryException - If parent of src is not a directory
- SafeModeException - create not allowed in safemode
- org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
- SnapshotAccessControlException - if path is in RO snapshot
- IOException - If an I/O error occurred.
RunTimeExceptions:
- org.apache.hadoop.fs.InvalidPathException - If src is invalid
-
getListing
Get a partial listing of the indicated directory.
- Parameters:
- src - the directory name
- startAfter - the name to start listing after, encoded in Java UTF-8
- needLocation - if the FileStatus should contain block locations
- Returns:
- a partial listing starting after startAfter
- Throws:
- org.apache.hadoop.security.AccessControlException - permission denied
- FileNotFoundException - file src is not found
- org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
- IOException - If an I/O error occurred
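A sketch of paging through a large directory with the startAfter cursor; the proxy is assumed, and the DirectoryListing accessors are used as defined in org.apache.hadoop.hdfs.protocol.DirectoryListing.

import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;
import org.apache.hadoop.hdfs.protocol.DirectoryListing;
import org.apache.hadoop.hdfs.protocol.HdfsFileStatus;

class ListingExample {
  static void listAll(ClientProtocol namenode, String dir) throws IOException {
    // EMPTY_NAME starts the listing at the beginning of the directory.
    DirectoryListing page = namenode.getListing(dir, HdfsFileStatus.EMPTY_NAME, false);
    while (page != null) {
      for (HdfsFileStatus st : page.getPartialListing()) {
        System.out.println(st.getLocalName());
      }
      if (!page.hasMore()) {
        break;
      }
      // Resume after the last name returned in the previous batch.
      page = namenode.getListing(dir, page.getLastName(), false);
    }
  }
}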
-
getBatchedListing
BatchedDirectoryListing getBatchedListing(String[] srcs, byte[] startAfter, boolean needLocation) throws IOException
Get a partial listing of the input directories.
- Parameters:
- srcs - the input directories
- startAfter - the name to start listing after, encoded in Java UTF-8
- needLocation - if the FileStatus should contain block locations
- Returns:
- a partial listing starting after startAfter; null if the input is empty
- Throws:
- IOException - if an I/O error occurred
-
getSnapshottableDirListing
Get the list of snapshottable directories that are owned by the current user. Returns all the snapshottable directories if the current user is a super user.
- Returns:
- The list of all the current snapshottable directories.
- Throws:
IOException- If an I/O error occurred.
-
getSnapshotListing
Get listing of all the snapshots for a snapshottable directory.
- Returns:
- Information about all the snapshots for a snapshottable directory
- Throws:
IOException- If an I/O error occurred
-
renewLease
Client programs can cause stateful changes in the NameNode that affect other clients. A client may obtain a file and neither abandon nor complete it. A client might hold a series of locks that prevent other clients from proceeding. Clearly, it would be bad if a client held a bunch of locks that it never gave up. This can happen easily if the client dies unexpectedly.
So, the NameNode will revoke the locks and live file-creates for clients that it thinks have died. A client tells the NameNode that it is still alive by periodically calling renewLease(). If a certain amount of time passes since the last call to renewLease(), the NameNode assumes the client has died.
- Parameters:
- namespaces - The full namespace list to which the renewLease RPC should be forwarded by RBF. Tips: on the NN side, this value should be null. On the RBF side, if this value is null, the RPC will be forwarded to all available namespaces; otherwise it will be forwarded only to the specified namespaces.
- Throws:
- org.apache.hadoop.security.AccessControlException - permission denied
- IOException - If an I/O error occurred
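A minimal sketch of the keep-alive behaviour this method exists for: a background task renews the lease periodically. The 30-second interval is an illustrative assumption; DFSClient's LeaseRenewer performs this for real clients.

import java.io.IOException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;

class LeaseHeartbeat {
  static ScheduledExecutorService start(ClientProtocol namenode, String clientName) {
    ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
    ses.scheduleAtFixedRate(() -> {
      try {
        // null namespaces: calling the NN directly, not through RBF.
        namenode.renewLease(clientName, null);
      } catch (IOException e) {
        e.printStackTrace(); // a real client would retry or fail its writes
      }
    }, 0, 30, TimeUnit.SECONDS);
    return ses;
  }
}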
-
recoverLease
Start lease recovery. Lightweight NameNode operation to trigger lease recovery.
- Parameters:
- src - path of the file to start lease recovery
- clientName - name of the current client
- Returns:
- true if the file is already closed
- Throws:
IOException
-
getStats
Get an array of aggregated statistics combining blocks of both type BlockType.CONTIGUOUS and BlockType.STRIPED in the filesystem. Use public constants like GET_STATS_CAPACITY_IDX in place of actual numbers to index into the array.
- [0] contains the total storage capacity of the system, in bytes.
- [1] contains the total used space of the system, in bytes.
- [2] contains the available storage of the system, in bytes.
- [3] contains number of low redundancy blocks in the system.
- [4] contains number of corrupt blocks.
- [5] contains number of blocks without any good replicas left.
- [6] contains number of blocks which have replication factor 1 and have lost the only replica.
- [7] contains number of bytes that are at risk for deletion.
- [8] contains number of pending deletion blocks.
- Throws:
IOException
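Indexing the returned array with the documented constants rather than raw offsets, as the description recommends (sketch; proxy assumed):

import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;

class FsStatsExample {
  static void printUsage(ClientProtocol namenode) throws IOException {
    long[] stats = namenode.getStats();
    // The constants defined on this interface name each slot.
    long capacity = stats[ClientProtocol.GET_STATS_CAPACITY_IDX];
    long used = stats[ClientProtocol.GET_STATS_USED_IDX];
    long remaining = stats[ClientProtocol.GET_STATS_REMAINING_IDX];
    System.out.printf("capacity=%d used=%d remaining=%d%n",
        capacity, used, remaining);
  }
}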
-
getReplicatedBlockStats
Get statistics pertaining to blocks of type BlockType.CONTIGUOUS in the filesystem.
- Throws:
IOException
-
getECBlockGroupStats
Get statistics pertaining to blocks of type BlockType.STRIPED in the filesystem.
- Throws:
IOException
-
getDatanodeReport
Get a report on the system's current datanodes. One DatanodeInfo object is returned for each DataNode. Return live datanodes if type is LIVE; dead datanodes if type is DEAD; otherwise all datanodes if type is ALL.
- Throws:
IOException
-
getDatanodeStorageReport
DatanodeStorageReport[] getDatanodeStorageReport(HdfsConstants.DatanodeReportType type) throws IOException
Get a report on the current datanode storages.
- Throws:
IOException
-
getPreferredBlockSize
Get the block size for the given file.
- Parameters:
- filename - The name of the file
- Returns:
- The number of bytes in each block
- Throws:
- IOException
- org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
-
setSafeMode
Enter, leave or get safe mode.
Safe mode is a name node state when it
- does not accept changes to name space (read-only), and
- does not replicate or delete blocks.
Safe mode is entered automatically at name node startup. Safe mode can also be entered manually using setSafeMode(SafeModeAction.SAFEMODE_ENTER, false).
At startup the name node accepts data node reports, collecting information about block locations. In order to leave safe mode it needs to collect a configurable percentage, called the threshold, of blocks that satisfy the minimal replication condition. The minimal replication condition is that each block must have at least dfs.namenode.replication.min replicas. When the threshold is reached the name node extends safe mode for a configurable amount of time to let the remaining data nodes check in before it starts replicating missing blocks. Then the name node leaves safe mode.
If safe mode is turned on manually using setSafeMode(SafeModeAction.SAFEMODE_ENTER, false) then the name node stays in safe mode until it is manually turned off using setSafeMode(SafeModeAction.SAFEMODE_LEAVE, false). The current state of the name node can be verified using setSafeMode(SafeModeAction.SAFEMODE_GET, false).
Configuration parameters:
- dfs.safemode.threshold.pct is the threshold parameter.
- dfs.safemode.extension is the safe mode extension parameter.
- dfs.namenode.replication.min is the minimal replication parameter.
Special cases:
- The name node does not enter safe mode at startup if the threshold is set to 0 or if the name space is empty.
- If the threshold is set to 1 then all blocks need to have at least minimal replication.
- If the threshold value is greater than 1 then the name node will not be able to turn off safe mode automatically.
- Safe mode can always be turned off manually.
- Parameters:
- action -
- 0 leave safe mode;
- 1 enter safe mode;
- 2 get safe mode state.
- isChecked - If true then the action will be done only in the active NN.
- Returns:
- 0 if the safe mode is OFF or
- 1 if the safe mode is ON.
- Throws:
IOException
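A sketch combining setSafeMode() with saveNamespace(), which per the saveNamespace() entry below requires superuser privilege and safe mode; error handling is elided and the proxy is assumed.

import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction;

class SafeModeExample {
  static void checkpointSafely(ClientProtocol namenode) throws IOException {
    namenode.setSafeMode(SafeModeAction.SAFEMODE_ENTER, false);
    try {
      // timeWindow=0, txGap=0: checkpoint unconditionally.
      namenode.saveNamespace(0, 0);
    } finally {
      namenode.setSafeMode(SafeModeAction.SAFEMODE_LEAVE, false);
    }
  }
}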
-
saveNamespace
Save namespace image. Saves the current namespace into storage directories and resets the edits log. Requires superuser privilege and safe mode.
- Parameters:
- timeWindow - NameNode does a checkpoint if the latest checkpoint was done beyond the given time period (in seconds).
- txGap - NameNode does a checkpoint if the gap between the latest checkpoint and the latest transaction id is greater than this gap.
- Returns:
- whether an extra checkpoint has been done
- Throws:
IOException- if image creation failed.
-
rollEdits
Roll the edit log. Requires superuser privileges.
- Returns:
- the txid of the new segment
- Throws:
- org.apache.hadoop.security.AccessControlException - if the superuser privilege is violated
- IOException - if the log roll fails
-
restoreFailedStorage
Enable/Disable restore failed storage. Sets a flag to enable restore of failed storage replicas.
- Throws:
- org.apache.hadoop.security.AccessControlException - if the superuser privilege is violated.
- IOException
-
refreshNodes
Tells the namenode to reread the hosts and exclude files.
- Throws:
IOException
-
finalizeUpgrade
Finalize previous upgrade. Remove file system state saved during the upgrade. The upgrade will become irreversible.
- Throws:
IOException
-
upgradeStatus
Get status of upgrade - finalized or not.
- Returns:
- true if upgrade is finalized or if no upgrade is in progress and false otherwise.
- Throws:
IOException
-
rollingUpgrade
Rolling upgrade operations.
- Parameters:
- action - either query, prepare or finalize.
- Returns:
- rolling upgrade information. On query, if no upgrade is in progress, returns null.
- Throws:
IOException
-
listCorruptFileBlocks
Each call returns a subset of the corrupt files in the system. To obtain all corrupt files, call this method repeatedly, each time passing in the cookie returned from the previous call.
- Returns:
- CorruptFileBlocks, containing a list of corrupt files (with duplicates if there is more than one corrupt block in a file) and a cookie
- Throws:
IOException
-
metaSave
Dumps namenode data structures into the specified file. If the file already exists, it is appended to.
- Throws:
IOException
-
setBalancerBandwidth
Tell all datanodes to use a new, non-persistent bandwidth value for dfs.datanode.balance.bandwidthPerSec.
- Parameters:
- bandwidth - Balancer bandwidth in bytes per second for this datanode.
- Throws:
IOException
-
getFileInfo
Get the file info for a specific file or directory.
- Parameters:
- src - The string representation of the path to the file
- Returns:
- object containing information regarding the file, or null if the file is not found
- Throws:
- org.apache.hadoop.security.AccessControlException - permission denied
- FileNotFoundException - file src is not found
- org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
- IOException - If an I/O error occurred
-
isFileClosed
Get the close status of a file.
- Parameters:
- src - The string representation of the path to the file
- Returns:
- true if the file is closed
- Throws:
- org.apache.hadoop.security.AccessControlException - permission denied
- FileNotFoundException - file src is not found
- org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
- IOException - If an I/O error occurred
-
getFileLinkInfo
Get the file info for a specific file or directory. If the path refers to a symlink then the FileStatus of the symlink is returned.
- Parameters:
- src - The string representation of the path to the file
- Returns:
- object containing information regarding the file, or null if the file is not found
- Throws:
- org.apache.hadoop.security.AccessControlException - permission denied
- org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink
- IOException - If an I/O error occurred
-
getLocatedFileInfo
Get the file info for a specific file or directory with LocatedBlocks.
- Parameters:
- src - The string representation of the path to the file
- needBlockToken - Generate block tokens for LocatedBlocks
- Returns:
- object containing information regarding the file, or null if the file is not found
- Throws:
- org.apache.hadoop.security.AccessControlException - permission denied
- FileNotFoundException - file src is not found
- IOException - If an I/O error occurred
-
getContentSummary
Get ContentSummary rooted at the specified directory.
- Parameters:
- path - The string representation of the path
- Throws:
- org.apache.hadoop.security.AccessControlException - permission denied
- FileNotFoundException - file path is not found
- org.apache.hadoop.fs.UnresolvedLinkException - if path contains a symlink.
- IOException - If an I/O error occurred
-
setQuota
void setQuota(String path, long namespaceQuota, long storagespaceQuota, org.apache.hadoop.fs.StorageType type) throws IOException
Set the quota for a directory.
- Parameters:
- path - The string representation of the path to the directory
- namespaceQuota - Limit on the number of names in the tree rooted at the directory
- storagespaceQuota - Limit on storage space occupied by all the files under this directory.
- type - StorageType that the space quota is intended to be set on. It may be null when called for the traditional space/namespace quota. When type is not null, the storagespaceQuota parameter is for the specified type and namespaceQuota must be HdfsConstants.QUOTA_DONT_SET.
The quota can take three kinds of values: (1) 0 or more will set the quota to that value, (2) HdfsConstants.QUOTA_DONT_SET implies the quota will not be changed, and (3) HdfsConstants.QUOTA_RESET implies the quota will be reset. Any other value is a runtime error.
- Throws:
- org.apache.hadoop.security.AccessControlException - permission denied
- FileNotFoundException - file path is not found
- QuotaExceededException - if the directory size is greater than the given quota
- org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
- SnapshotAccessControlException - if path is in RO snapshot
- IOException - If an I/O error occurred
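The sentinel values are used like this (sketch; proxy assumed, and a null type means the traditional space/namespace quota):

import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;

class QuotaExample {
  static void manageQuota(ClientProtocol namenode, String dir) throws IOException {
    // Limit the tree to 10,000 names; leave the space quota unchanged.
    namenode.setQuota(dir, 10_000L, HdfsConstants.QUOTA_DONT_SET, null);
    // Later: clear the namespace quota again.
    namenode.setQuota(dir, HdfsConstants.QUOTA_RESET,
        HdfsConstants.QUOTA_DONT_SET, null);
  }
}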
-
fsync
Write all metadata for this file into persistent storage. The file must be currently open for writing.
- Parameters:
- src - The string representation of the path
- inodeId - The inode ID, or GRANDFATHER_INODE_ID if the client is too old to support fsync with inode IDs.
- client - The string representation of the client
- lastBlockLength - The length of the last block (under construction) to be reported to NameNode
- Throws:
- org.apache.hadoop.security.AccessControlException - permission denied
- FileNotFoundException - file src is not found
- org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink.
- IOException - If an I/O error occurred
-
setTimes
Sets the modification and access time of the file to the specified time.
- Parameters:
- src - The string representation of the path
- mtime - The number of milliseconds since Jan 1, 1970. Setting a negative mtime means that modification time should not be set by this call.
- atime - The number of milliseconds since Jan 1, 1970. Setting a negative atime means that access time should not be set by this call.
- Throws:
- org.apache.hadoop.security.AccessControlException - permission denied
- FileNotFoundException - file src is not found
- org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink.
- SnapshotAccessControlException - if path is in RO snapshot
- IOException - If an I/O error occurred
-
createSymlink
void createSymlink(String target, String link, org.apache.hadoop.fs.permission.FsPermission dirPerm, boolean createParent) throws IOException
Create symlink to a file or directory.
- Parameters:
- target - The path of the destination that the link points to.
- link - The path of the link being created.
- dirPerm - permissions to use when creating parent directories
- createParent - if true then missing parent dirs are created; if false then the parent must exist
- Throws:
- org.apache.hadoop.security.AccessControlException - permission denied
- org.apache.hadoop.fs.FileAlreadyExistsException - If file link already exists
- FileNotFoundException - If parent of link does not exist and createParent is false
- org.apache.hadoop.fs.ParentNotDirectoryException - If parent of link is not a directory.
- org.apache.hadoop.fs.UnresolvedLinkException - if link contains a symlink.
- SnapshotAccessControlException - if path is in RO snapshot
- IOException - If an I/O error occurred
-
getLinkTarget
Return the target of the given symlink. If there is an intermediate symlink in the path (i.e. a symlink leading up to the final path component) then the given path is returned with this symlink resolved.
- Parameters:
- path - The path with a link that needs resolution.
- Returns:
- The path after resolving the first symbolic link in the path.
- Throws:
- org.apache.hadoop.security.AccessControlException - permission denied
- FileNotFoundException - If path does not exist
- IOException - If the given path does not refer to a symlink or an I/O error occurred
-
updateBlockForPipeline
Get a new generation stamp together with an access token for a block under construction. This method is called only when a client needs to recover a failed pipeline or set up a pipeline for appending to a block.
- Parameters:
- block - a block
- clientName - the name of the client
- Returns:
- a located block with a new generation stamp and an access token
- Throws:
IOException- if any error occurs
-
updatePipeline
void updatePipeline(String clientName, ExtendedBlock oldBlock, ExtendedBlock newBlock, DatanodeID[] newNodes, String[] newStorageIDs) throws IOException
Update a pipeline for a block under construction.
- Parameters:
- clientName - the name of the client
- oldBlock - the old block
- newBlock - the new block containing new generation stamp and length
- newNodes - datanodes in the pipeline
- Throws:
IOException- if any error occurs
-
getDelegationToken
org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> getDelegationToken(org.apache.hadoop.io.Text renewer) throws IOException
Get a valid Delegation Token.
- Parameters:
- renewer - the designated renewer for the token
- Throws:
IOException
-
renewDelegationToken
long renewDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token) throws IOException
Renew an existing delegation token.
- Parameters:
- token - delegation token obtained earlier
- Returns:
- the new expiration time
- Throws:
IOException
-
cancelDelegationToken
void cancelDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token) throws IOException
Cancel an existing delegation token.
- Parameters:
- token - delegation token
- Throws:
IOException
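The three delegation-token calls above compose into the usual lifecycle; the renewer name here is an illustrative assumption.

import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;
import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;

class DelegationTokenExample {
  static void tokenRoundTrip(ClientProtocol namenode) throws IOException {
    // Obtain a token, naming the service allowed to renew it.
    Token<DelegationTokenIdentifier> token =
        namenode.getDelegationToken(new Text("yarn"));
    long newExpiry = namenode.renewDelegationToken(token);
    System.out.println("token now expires at " + newExpiry);
    namenode.cancelDelegationToken(token); // invalidate when done
  }
}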
-
getDataEncryptionKey
- Returns:
- encryption key so a client can encrypt data sent via the DataTransferProtocol to/from DataNodes.
- Throws:
IOException
-
createSnapshot
Create a snapshot.
- Parameters:
- snapshotRoot - the path that is being snapshotted
- snapshotName - name of the snapshot created
- Returns:
- the snapshot path.
- Throws:
IOException
-
deleteSnapshot
Delete a specific snapshot of a snapshottable directory.
- Parameters:
- snapshotRoot - The snapshottable directory
- snapshotName - Name of the snapshot for the snapshottable directory
- Throws:
IOException
-
renameSnapshot
void renameSnapshot(String snapshotRoot, String snapshotOldName, String snapshotNewName) throws IOException
Rename a snapshot.
- Parameters:
- snapshotRoot - the directory path where the snapshot was taken
- snapshotOldName - old name of the snapshot
- snapshotNewName - new name of the snapshot
- Throws:
IOException
-
allowSnapshot
Allow snapshot on a directory.
- Parameters:
- snapshotRoot - the directory to be snapshotted
- Throws:
IOException- on error
-
disallowSnapshot
Disallow snapshot on a directory.
- Parameters:
- snapshotRoot - the directory to disallow snapshots on
- Throws:
IOException- on error
-
getSnapshotDiffReport
SnapshotDiffReport getSnapshotDiffReport(String snapshotRoot, String fromSnapshot, String toSnapshot) throws IOException
Get the difference between two snapshots, or between a snapshot and the current tree of a directory.
- Parameters:
- snapshotRoot - full path of the directory where snapshots are taken
- fromSnapshot - snapshot name of the from point. Null indicates the current tree.
- toSnapshot - snapshot name of the to point. Null indicates the current tree.
- Returns:
- The difference report represented as a SnapshotDiffReport.
- Throws:
IOException- on error
-
getSnapshotDiffReportListing
SnapshotDiffReportListing getSnapshotDiffReportListing(String snapshotRoot, String fromSnapshot, String toSnapshot, byte[] startPath, int index) throws IOException
Get the difference between two snapshots of a directory iteratively.
- Parameters:
- snapshotRoot - full path of the directory where snapshots are taken
- fromSnapshot - snapshot name of the from point. Null indicates the current tree.
- toSnapshot - snapshot name of the to point. Null indicates the current tree.
- startPath - path relative to the snapshottable root directory from where the snapshotdiff computation needs to start across multiple rpc calls
- index - index in the created or deleted list of the directory at which the snapshotdiff computation stopped during the last rpc call, because the number of entries exceeded the snapshotdiff entry limit. -1 indicates that the snapshotdiff computation needs to start right from the startPath provided.
- Returns:
- The difference report represented as a SnapshotDiffReportListing.
- Throws:
IOException- on error
-
addCacheDirective
Add a CacheDirective to the CacheManager.
- Parameters:
- directive - A CacheDirectiveInfo to be added
- flags - CacheFlags to use for this operation.
- Returns:
- A CacheDirectiveInfo associated with the added directive
- Throws:
IOException- if the directive could not be added
-
modifyCacheDirective
void modifyCacheDirective(CacheDirectiveInfo directive, EnumSet<CacheFlag> flags) throws IOException
Modify a CacheDirective in the CacheManager.
- Parameters:
- flags - CacheFlags to use for this operation.
- Throws:
IOException- if the directive could not be modified
-
removeCacheDirective
Remove a CacheDirectiveInfo from the CacheManager.
- Parameters:
- id - the ID of a CacheDirectiveInfo
- Throws:
IOException- if the cache directive could not be removed
-
listCacheDirectives
org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<CacheDirectiveEntry> listCacheDirectives(long prevId, CacheDirectiveInfo filter) throws IOException
List the set of cached paths of a cache pool. Incrementally fetches results from the server.
- Parameters:
- prevId - The last listed entry ID, or -1 if this is the first call to listCacheDirectives.
- filter - Parameters to use to filter the list results, or null to display all directives visible to us.
- Returns:
- A batch of CacheDirectiveEntry objects.
- Throws:
IOException
-
addCachePool
Add a new cache pool.
- Parameters:
- info - Description of the new cache pool
- Throws:
IOException- If the request could not be completed.
-
modifyCachePool
Modify an existing cache pool.
- Parameters:
- req - The request to modify a cache pool.
- Throws:
IOException- If the request could not be completed.
-
removeCachePool
Remove a cache pool.
- Parameters:
- pool - name of the cache pool to remove.
- Throws:
IOException- if the cache pool did not exist, or could not be removed.
-
listCachePools
org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<CachePoolEntry> listCachePools(String prevPool) throws IOException
List the set of cache pools. Incrementally fetches results from the server.
- Parameters:
- prevPool - name of the last pool listed, or the empty string if this is the first invocation of listCachePools
- Returns:
- A batch of CachePoolEntry objects.
- Throws:
IOException
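A sketch of the cursor-based batched listing pattern shared by listCachePools, listCacheDirectives, listEncryptionZones and listReencryptionStatus; the BatchedEntries accessors (get, size, hasMore) are used as defined in org.apache.hadoop.fs.BatchedRemoteIterator, and the proxy is assumed.

import java.io.IOException;
import org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries;
import org.apache.hadoop.hdfs.protocol.CachePoolEntry;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;

class CachePoolListingExample {
  static void listAllPools(ClientProtocol namenode) throws IOException {
    String prevPool = ""; // empty string on the first invocation
    BatchedEntries<CachePoolEntry> batch;
    do {
      batch = namenode.listCachePools(prevPool);
      for (int i = 0; i < batch.size(); i++) {
        CachePoolEntry entry = batch.get(i);
        System.out.println(entry.getInfo().getPoolName());
        prevPool = entry.getInfo().getPoolName(); // cursor for next call
      }
    } while (batch.hasMore());
  }
}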
-
modifyAclEntries
void modifyAclEntries(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException
Modifies ACL entries of files and directories. This method can add new ACL entries or modify the permissions on existing ACL entries. All existing ACL entries that are not specified in this call are retained without changes. (Modifications are merged into the current ACL.)
- Throws:
IOException
-
removeAclEntries
void removeAclEntries(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException
Removes ACL entries from files and directories. Other ACL entries are retained.
- Throws:
IOException
-
removeDefaultAcl
Removes all default ACL entries from files and directories.
- Throws:
IOException
-
removeAcl
Removes all but the base ACL entries of files and directories. The entries for user, group, and others are retained for compatibility with permission bits.
- Throws:
IOException
-
setAcl
Fully replaces ACL of files and directories, discarding all existing entries.
- Throws:
IOException
-
getAclStatus
Gets the ACLs of files and directories.
- Throws:
IOException
-
createEncryptionZone
Create an encryption zone.
- Throws:
IOException
-
getEZForPath
Get the encryption zone for a path.
- Throws:
IOException
-
listEncryptionZones
org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<EncryptionZone> listEncryptionZones(long prevId) throws IOException
Used to implement cursor-based batched listing of EncryptionZones.
- Parameters:
- prevId - ID of the last item in the previous batch. If there is no previous batch, a negative value can be used.
- Returns:
- Batch of encryption zones.
- Throws:
IOException
-
reencryptEncryptionZone
Used to implement re-encryption of encryption zones.
- Parameters:
- zone - the encryption zone to re-encrypt.
- action - the action for the re-encryption.
- Throws:
IOException
-
listReencryptionStatus
org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<ZoneReencryptionStatus> listReencryptionStatus(long prevId) throws IOException
Used to implement cursor-based batched listing of ZoneReencryptionStatus entries.
- Parameters:
- prevId - ID of the last item in the previous batch. If there is no previous batch, a negative value can be used.
- Returns:
- Batch of zone re-encryption statuses.
- Throws:
IOException
-
setXAttr
void setXAttr(String src, XAttr xAttr, EnumSet<org.apache.hadoop.fs.XAttrSetFlag> flag) throws IOException
Set xattr of a file or directory. The name must be prefixed with the namespace followed by ".". For example, "user.attr".
Refer to the HDFS extended attributes user documentation for details.
- Parameters:
- src - file or directory
- xAttr - XAttr to set
- flag - set flag
- Throws:
IOException
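A sketch of setting a user-namespace xattr. The XAttr.Builder usage is an assumption based on org.apache.hadoop.fs.XAttr and may differ across Hadoop versions.

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.EnumSet;
import org.apache.hadoop.fs.XAttr;
import org.apache.hadoop.fs.XAttrSetFlag;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;

class XAttrExample {
  static void tagFile(ClientProtocol namenode, String src) throws IOException {
    XAttr attr = new XAttr.Builder()
        .setNameSpace(XAttr.NameSpace.USER)   // the "user." prefix
        .setName("attr")
        .setValue("some-value".getBytes(StandardCharsets.UTF_8))
        .build();
    // CREATE fails if the xattr already exists; REPLACE is the inverse.
    namenode.setXAttr(src, attr, EnumSet.of(XAttrSetFlag.CREATE));
  }
}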
-
getXAttrs
Get xattrs of a file or directory. Values in the xAttrs parameter are ignored. If xAttrs is null or empty, this is the same as getting all xattrs of the file or directory. Only those xattrs which the logged-in user has permission to view are returned.
Refer to the HDFS extended attributes user documentation for details.
- Parameters:
- src - file or directory
- xAttrs - xAttrs to get
- Returns:
- XAttr list
- Throws:
IOException
-
listXAttrs
List the xattr names for a file or directory. Only the xattr names for which the logged-in user has permission to access will be returned.
Refer to the HDFS extended attributes user documentation for details.
- Parameters:
- src - file or directory
- Returns:
- XAttr list
- Throws:
IOException
-
removeXAttr
Remove xattr of a file or directory. The value in the xAttr parameter is ignored. The name must be prefixed with the namespace followed by ".". For example, "user.attr".
Refer to the HDFS extended attributes user documentation for details.
- Parameters:
- src - file or directory
- xAttr - XAttr to remove
- Throws:
IOException
-
checkAccess
Checks if the user can access a path. The mode specifies which access checks to perform. If the requested permissions are granted, then the method returns normally. If access is denied, then the method throws an AccessControlException. In general, applications should avoid using this method, due to the risk of time-of-check/time-of-use race conditions. The permissions on a file may change immediately after the access call returns.
- Parameters:
- path - Path to check
- mode - type of access to check
- Throws:
- org.apache.hadoop.security.AccessControlException - if access is denied
- FileNotFoundException - if the path does not exist
- IOException - see specific implementation
-
getCurrentEditLogTxid
Get the highest txid the NameNode knows has been written to the edit log, or -1 if the NameNode's edit log is not yet open for write. Used as the starting point for the inotify event stream.
- Throws:
IOException
-
getEditsFromTxid
Get an ordered list of batches of events corresponding to the edit log transactions for txids equal to or greater than txid.
- Throws:
IOException
-
setErasureCodingPolicy
Set an erasure coding policy on a specified path.
- Parameters:
- src - The path to set the policy on.
- ecPolicyName - The erasure coding policy name.
- Throws:
IOException
-
addErasureCodingPolicies
AddErasureCodingPolicyResponse[] addErasureCodingPolicies(ErasureCodingPolicy[] policies) throws IOException
Add Erasure coding policies to HDFS. For each policy input, schema and cellSize are required; name and id are ignored. They will be automatically created and assigned by the Namenode once the policy is successfully added, and will be returned in the response.
- Parameters:
- policies - The user-defined EC policy list to add.
- Returns:
- Return the response list of adding operations.
- Throws:
IOException
-
removeErasureCodingPolicy
Remove erasure coding policy.
- Parameters:
- ecPolicyName - The name of the policy to be removed.
- Throws:
IOException
-
enableErasureCodingPolicy
Enable erasure coding policy.
- Parameters:
- ecPolicyName - The name of the policy to be enabled.
- Throws:
IOException
-
disableErasureCodingPolicy
Disable erasure coding policy.
- Parameters:
- ecPolicyName - The name of the policy to be disabled.
- Throws:
IOException
-
getErasureCodingPolicies
Get the erasure coding policies loaded in the Namenode, excluding the REPLICATION policy.
- Throws:
IOException
-
getErasureCodingCodecs
Get the erasure coding codecs loaded in the Namenode.
- Throws:
IOException
-
getErasureCodingPolicy
Get the information about the EC policy for the path. Null will be returned if the directory or file has the REPLICATION policy.
- Parameters:
- src - path to get the info for
- Throws:
IOException
-
unsetErasureCodingPolicy
Unset erasure coding policy from a specified path.
- Parameters:
- src - The path to unset the policy on.
- Throws:
IOException
-
getECTopologyResultForPolicies
Verifies if the given policies are supported in the given cluster setup. If no policy is specified, checks for all enabled policies.
- Parameters:
- policyNames - names of the policies.
- Returns:
- the result if the given policies are supported in the cluster setup
- Throws:
IOException
-
getQuotaUsage
Get QuotaUsage rooted at the specified directory. Note: due to HDFS-6763, the standby/observer doesn't keep up-to-date info about quota usage, and thus even though this is ReadOnly, it can only be directed to the active namenode.
- Parameters:
- path - The string representation of the path
- Throws:
- org.apache.hadoop.security.AccessControlException - permission denied
- FileNotFoundException - file path is not found
- org.apache.hadoop.fs.UnresolvedLinkException - if path contains a symlink.
- IOException - If an I/O error occurred
-
listOpenFiles
@Deprecated org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<OpenFileEntry> listOpenFiles(long prevId) throws IOException
Deprecated. List open files in the system in batches. The INode id is the cursor, and the open files returned in a batch will have their INode ids greater than the cursor INode id. Open files can only be requested by the super user, and the list across batches is not atomic.
- Parameters:
- prevId - the cursor INode id.
- Throws:
IOException
-
listOpenFiles
org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<OpenFileEntry> listOpenFiles(long prevId, EnumSet<OpenFilesIterator.OpenFilesType> openFilesTypes, String path) throws IOException
List open files in the system in batches. The INode id is the cursor, and the open files returned in a batch will have their INode ids greater than the cursor INode id. Open files can only be requested by the super user, and the list across batches is not atomic.
- Parameters:
- prevId - the cursor INode id.
- openFilesTypes - types to filter the open files.
- path - path to filter the open files.
- Throws:
IOException
-
getHAServiceState
Get HA service state of the server.
- Returns:
- server HA state
- Throws:
IOException
-
msync
Called by the client to wait until the server has reached the state id of the client. The client and server state ids are given by the client-side and server-side alignment contexts respectively. This can be a blocking call.
- Throws:
IOException
-
satisfyStoragePolicy
Satisfy the storage policy for a file/directory.
- Parameters:
- path - Path of an existing file/directory.
- Throws:
- org.apache.hadoop.security.AccessControlException - If access is denied.
- org.apache.hadoop.fs.UnresolvedLinkException - if path contains a symlink.
- FileNotFoundException - If file/dir path is not found.
- SafeModeException - not allowed in safemode.
- IOException
-
getSlowDatanodeReport
Get a report on all of the slow Datanodes. Slow running datanodes are identified based on the outlier detection algorithm, if slow peer tracking is enabled for the DFS cluster.
- Returns:
- Datanode report for slow running datanodes.
- Throws:
IOException- If an I/O error occurs.
-
getEnclosingRoot
Get the enclosing root for a path.
- Throws:
IOException
-