Class ClientNamenodeProtocolTranslatorPB
- All Implemented Interfaces:
Closeable, AutoCloseable, ClientProtocol, org.apache.hadoop.ipc.ProtocolMetaInterface, org.apache.hadoop.ipc.ProtocolTranslator
-
Nested Class Summary
Nested Classes
- protected static class
- protected static class
Field Summary
Fields (all protected static final; names matched from Field Details below):
- org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.FinalizeUpgradeRequestProto VOID_FINALIZE_UPGRADE_REQUEST
- org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.GetDataEncryptionKeyRequestProto VOID_GET_DATA_ENCRYPTIONKEY_REQUEST
- org.apache.hadoop.hdfs.protocol.proto.ErasureCodingProtos.GetErasureCodingCodecsRequestProto VOID_GET_EC_CODEC_REQUEST
- org.apache.hadoop.hdfs.protocol.proto.ErasureCodingProtos.GetErasureCodingPoliciesRequestProto VOID_GET_EC_POLICIES_REQUEST
- org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.GetFsECBlockGroupStatsRequestProto VOID_GET_FS_ECBLOCKGROUP_STATS_REQUEST
- org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.GetFsReplicatedBlockStatsRequestProto VOID_GET_FS_REPLICATED_BLOCK_STATS_REQUEST
- org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.GetFsStatusRequestProto VOID_GET_FSSTATUS_REQUEST
- org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.GetServerDefaultsRequestProto VOID_GET_SERVER_DEFAULT_REQUEST
- org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.GetStoragePoliciesRequestProto VOID_GET_STORAGE_POLICIES_REQUEST
- org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.RefreshNodesRequestProto VOID_REFRESH_NODES_REQUEST
- org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.RollEditsRequestProto VOID_ROLLEDITS_REQUEST
- org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.UpgradeStatusRequestProto VOID_UPGRADE_STATUS_REQUEST

Fields inherited from interface org.apache.hadoop.hdfs.protocol.ClientProtocol:
GET_STATS_BYTES_IN_FUTURE_BLOCKS_IDX, GET_STATS_CAPACITY_IDX, GET_STATS_CORRUPT_BLOCKS_IDX, GET_STATS_LOW_REDUNDANCY_IDX, GET_STATS_MISSING_BLOCKS_IDX, GET_STATS_MISSING_REPL_ONE_BLOCKS_IDX, GET_STATS_PENDING_DELETION_BLOCKS_IDX, GET_STATS_REMAINING_IDX, GET_STATS_UNDER_REPLICATED_IDX, GET_STATS_USED_IDX, STATS_ARRAY_LENGTH, versionID
Constructor Summary
Constructors
- ClientNamenodeProtocolTranslatorPB
Method Summary
Methods (return types and method names lost in extraction are restored from the Method Details sections below and from the ClientProtocol interface):

- void abandonBlock(ExtendedBlock b, long fileId, String src, String holder): The client can give up on a block by calling abandonBlock().
- LocatedBlock addBlock(String src, String clientName, ExtendedBlock previous, DatanodeInfo[] excludeNodes, long fileId, String[] favoredNodes, EnumSet<AddBlockFlag> addBlockFlags): A client that wants to write an additional block to the indicated filename (which must currently be open for writing) should call addBlock().
- long addCacheDirective(CacheDirectiveInfo directive, EnumSet<CacheFlag> flags): Add a CacheDirective to the CacheManager.
- void addCachePool(CachePoolInfo info): Add a new cache pool.
- addErasureCodingPolicies(ErasureCodingPolicy[] policies): Add Erasure coding policies to HDFS.
- void allowSnapshot(String snapshotRoot): Allow snapshot on a directory.
- LastBlockWithStatus append(String src, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag): Append to the end of the file.
- void cancelDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token): Cancel an existing delegation token.
- void checkAccess(String path, org.apache.hadoop.fs.permission.FsAction mode): Checks if the user can access a path.
- void close()
- boolean complete(String src, String clientName, ExtendedBlock last, long fileId): The client is done writing data to the given filename, and would like to complete it.
- void concat(String trg, String[] srcs): Moves blocks from srcs to trg and deletes srcs.
- HdfsFileStatus create(String src, org.apache.hadoop.fs.permission.FsPermission masked, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize, org.apache.hadoop.crypto.CryptoProtocolVersion[] supportedVersions, String ecPolicyName, String storagePolicy): Create a new file entry in the namespace.
- void createEncryptionZone(String src, String keyName): Create an encryption zone.
- createSnapshot(String snapshotRoot, String snapshotName): Create a snapshot.
- void createSymlink(String target, String link, org.apache.hadoop.fs.permission.FsPermission dirPerm, boolean createParent): Create symlink to a file or directory.
- boolean delete(String src, boolean recursive): Delete the given file or directory from the file system.
- void deleteSnapshot(String snapshotRoot, String snapshotName): Delete a specific snapshot of a snapshottable directory.
- void disableErasureCodingPolicy(String ecPolicyName): Disable erasure coding policy.
- void disallowSnapshot(String snapshotRoot): Disallow snapshot on a directory.
- void enableErasureCodingPolicy(String ecPolicyName): Enable erasure coding policy.
- void finalizeUpgrade(): Finalize previous upgrade.
- void fsync(…): Write all metadata for this file into persistent storage.
- org.apache.hadoop.fs.permission.AclStatus getAclStatus(String src): Gets the ACLs of files and directories.
- LocatedBlock getAdditionalDatanode(String src, long fileId, ExtendedBlock blk, DatanodeInfo[] existings, String[] existingStorageIDs, DatanodeInfo[] excludes, int numAdditionalNodes, String clientName): Get a datanode for an existing pipeline.
- getBatchedListing(String[] srcs, byte[] startAfter, boolean needLocation): Get a partial listing of the input directories.
- LocatedBlocks getBlockLocations(String src, long offset, long length): Get locations of the blocks of the specified file within the specified range.
- org.apache.hadoop.fs.ContentSummary getContentSummary(String path): Get ContentSummary rooted at the specified directory.
- long getCurrentEditLogTxid(): Get the highest txid the NameNode knows has been written to the edit log, or -1 if the NameNode's edit log is not yet open for write.
- getDatanodeReport(…): Get a report on the system's current datanodes.
- getDatanodeStorageReport(…): Get a report on the current datanode storages.
- org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> getDelegationToken(org.apache.hadoop.io.Text renewer): Get a valid Delegation Token.
- getECBlockGroupStats(): Get statistics pertaining to blocks of type BlockType.STRIPED in the filesystem.
- getECTopologyResultForPolicies(String... policyNames): Verifies if the given policies are supported in the given cluster setup.
- getEditsFromTxid(long txid): Get an ordered list of batches of events corresponding to the edit log transactions for txids equal to or greater than txid.
- org.apache.hadoop.fs.Path getEnclosingRoot(String filename): Get the enclosing root for a path.
- getErasureCodingCodecs(): Get the erasure coding codecs loaded in Namenode.
- getErasureCodingPolicies(): Get the erasure coding policies loaded in Namenode, excluding REPLICATION policy.
- getErasureCodingPolicy(…): Get the information about the EC policy for the path.
- getEZForPath(String src): Get the encryption zone for a path.
- getFileInfo(String src): Get the file info for a specific file or directory.
- getFileLinkInfo(String src): Get the file info for a specific file or directory.
- org.apache.hadoop.ha.HAServiceProtocol.HAServiceState getHAServiceState(): Get HA service state of the server.
- getLinkTarget(String path): Return the target of the given symlink.
- getListing(String src, byte[] startAfter, boolean needLocation): Get a partial listing of the indicated directory.
- getLocatedFileInfo(String src, boolean needBlockToken): Get the file info for a specific file or directory with LocatedBlocks.
- long getPreferredBlockSize(String filename): Get the block size for the given file.
- org.apache.hadoop.fs.QuotaUsage getQuotaUsage(String path): Get QuotaUsage rooted at the specified directory.
- getReplicatedBlockStats(): Get statistics pertaining to blocks of type BlockType.CONTIGUOUS in the filesystem.
- org.apache.hadoop.fs.FsServerDefaults getServerDefaults(): Get server default values for a number of configuration params.
- getSlowDatanodeReport(): Get report on all of the slow Datanodes.
- getSnapshotDiffReport(String snapshotRoot, String fromSnapshot, String toSnapshot): Get the difference between two snapshots, or between a snapshot and the current tree of a directory.
- getSnapshotDiffReportListing(String snapshotRoot, String fromSnapshot, String toSnapshot, byte[] startPath, int index): Get the difference between two snapshots of a directory iteratively.
- getSnapshotListing(String path): Get listing of all the snapshots for a snapshottable directory.
- getSnapshottableDirListing(): Get the list of snapshottable directories that are owned by the current user.
- long[] getStats(): Get an array of aggregated statistics combining blocks of both type BlockType.CONTIGUOUS and BlockType.STRIPED in the filesystem.
- getStoragePolicies(): Get all the available block storage policies.
- getStoragePolicy(String path): Get the storage policy for a file/directory.
- getXAttrs(…): Get xattrs of a file or directory.
- boolean isFileClosed(String src): Get the close status of a file.
- boolean isMethodSupported(String methodName)
- org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<CacheDirectiveEntry> listCacheDirectives(long prevId, CacheDirectiveInfo filter): List the set of cached paths of a cache pool.
- org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<CachePoolEntry> listCachePools(String prevKey): List the set of cache pools.
- listCorruptFileBlocks(String path, String cookie)
- org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<EncryptionZone> listEncryptionZones(long id): Used to implement cursor-based batched listing of EncryptionZones.
- org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<OpenFileEntry> listOpenFiles(long prevId): Deprecated.
- org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<OpenFileEntry> listOpenFiles(long prevId, EnumSet<OpenFilesIterator.OpenFilesType> openFilesTypes, String path): List open files in the system in batches.
- org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<ZoneReencryptionStatus> listReencryptionStatus(long id): Used to implement cursor-based batched listing of ZoneReencryptionStatus.
- listXAttrs(String src): List the xattrs names for a file or directory.
- void metaSave(…): Dumps namenode data structures into specified file.
- boolean mkdirs(String src, org.apache.hadoop.fs.permission.FsPermission masked, boolean createParent): Create a directory (or hierarchy of directories) with the given name and permission.
- void modifyAclEntries(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec): Modifies ACL entries of files and directories.
- void modifyCacheDirective(CacheDirectiveInfo directive, EnumSet<CacheFlag> flags): Modify a CacheDirective in the CacheManager.
- void modifyCachePool(…): Modify an existing cache pool.
- void msync(): Called by client to wait until the server has reached the state id of the client.
- boolean recoverLease(String src, String clientName): Start lease recovery.
- void reencryptEncryptionZone(String zone, HdfsConstants.ReencryptAction action): Used to implement re-encryption of encryption zones.
- void refreshNodes(): Tells the namenode to reread the hosts and exclude files.
- void removeAcl(…): Removes all but the base ACL entries of files and directories.
- void removeAclEntries(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec): Removes ACL entries from files and directories.
- void removeCacheDirective(long id): Remove a CacheDirectiveInfo from the CacheManager.
- void removeCachePool(String cachePoolName): Remove a cache pool.
- void removeDefaultAcl(String src): Removes all default ACL entries from files and directories.
- void removeErasureCodingPolicy(String ecPolicyName): Remove erasure coding policy.
- void removeXAttr(String src, XAttr xAttr): Remove xattr of a file or directory. Value in the xAttr parameter is ignored.
- boolean rename(String src, String dst): Rename an item in the file system namespace.
- void rename2(String src, String dst, org.apache.hadoop.fs.Options.Rename... options): Rename src to dst.
- void renameSnapshot(String snapshotRoot, String snapshotOldName, String snapshotNewName): Rename a snapshot.
- long renewDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token): Renew an existing delegation token.
- void renewLease(String clientName, List<String> namespaces): Client programs can cause stateful changes in the NameNode that affect other clients.
- void reportBadBlocks(LocatedBlock[] blocks): The client wants to report corrupted blocks (blocks with specified locations on datanodes).
- boolean restoreFailedStorage(…): Enable/Disable restore failed storage.
- long rollEdits(): Roll the edit log.
- rollingUpgrade(…): Rolling upgrade operations.
- void satisfyStoragePolicy(…): Satisfy the storage policy for a file/directory.
- boolean saveNamespace(long timeWindow, long txGap): Save namespace image.
- void setAcl(…): Fully replaces ACL of files and directories, discarding all existing entries.
- void setBalancerBandwidth(long bandwidth): Tell all datanodes to use a new, non-persistent bandwidth value for dfs.datanode.balance.bandwidthPerSec.
- void setErasureCodingPolicy(String src, String ecPolicyName): Set an erasure coding policy on a specified path.
- void setOwner(String src, String username, String groupname): Set Owner of a path (i.e. a file or a directory).
- void setPermission(String src, org.apache.hadoop.fs.permission.FsPermission permission): Set permissions for an existing file/directory.
- void setQuota(String path, long namespaceQuota, long storagespaceQuota, org.apache.hadoop.fs.StorageType type): Set the quota for a directory.
- boolean setReplication(String src, short replication): Set replication for an existing file.
- boolean setSafeMode(HdfsConstants.SafeModeAction action, boolean isChecked): Enter, leave or get safe mode.
- void setStoragePolicy(String src, String policyName): Set the storage policy for a file/directory.
- void setTimes(…): Sets the modification and access time of the file to the specified time.
- void setXAttr(…): Set xattr of a file or directory.
- boolean truncate(…): Truncate file src to new size.
- void unsetErasureCodingPolicy(…): Unset erasure coding policy from a specified path.
- void unsetStoragePolicy(String src): Unset the storage policy set for a given file or directory.
- LocatedBlock updateBlockForPipeline(ExtendedBlock block, String clientName): Get a new generation stamp together with an access token for a block under construction. This method is called only when a client needs to recover a failed pipeline or set up a pipeline for appending to a block.
- void updatePipeline(String clientName, ExtendedBlock oldBlock, ExtendedBlock newBlock, DatanodeID[] newNodes, String[] storageIDs): Update a pipeline for a block under construction.
- boolean upgradeStatus(): Get status of upgrade - finalized or not.
-
Field Details
-
VOID_GET_SERVER_DEFAULT_REQUEST
protected static final org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.GetServerDefaultsRequestProto VOID_GET_SERVER_DEFAULT_REQUEST -
VOID_GET_FSSTATUS_REQUEST
protected static final org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.GetFsStatusRequestProto VOID_GET_FSSTATUS_REQUEST -
VOID_GET_FS_REPLICATED_BLOCK_STATS_REQUEST
protected static final org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.GetFsReplicatedBlockStatsRequestProto VOID_GET_FS_REPLICATED_BLOCK_STATS_REQUEST -
VOID_GET_FS_ECBLOCKGROUP_STATS_REQUEST
protected static final org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.GetFsECBlockGroupStatsRequestProto VOID_GET_FS_ECBLOCKGROUP_STATS_REQUEST -
VOID_ROLLEDITS_REQUEST
protected static final org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.RollEditsRequestProto VOID_ROLLEDITS_REQUEST -
VOID_REFRESH_NODES_REQUEST
protected static final org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.RefreshNodesRequestProto VOID_REFRESH_NODES_REQUEST -
VOID_FINALIZE_UPGRADE_REQUEST
protected static final org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.FinalizeUpgradeRequestProto VOID_FINALIZE_UPGRADE_REQUEST -
VOID_UPGRADE_STATUS_REQUEST
protected static final org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.UpgradeStatusRequestProto VOID_UPGRADE_STATUS_REQUEST -
VOID_GET_DATA_ENCRYPTIONKEY_REQUEST
protected static final org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.GetDataEncryptionKeyRequestProto VOID_GET_DATA_ENCRYPTIONKEY_REQUEST -
VOID_GET_STORAGE_POLICIES_REQUEST
protected static final org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos.GetStoragePoliciesRequestProto VOID_GET_STORAGE_POLICIES_REQUEST -
VOID_GET_EC_POLICIES_REQUEST
protected static final org.apache.hadoop.hdfs.protocol.proto.ErasureCodingProtos.GetErasureCodingPoliciesRequestProto VOID_GET_EC_POLICIES_REQUEST -
VOID_GET_EC_CODEC_REQUEST
protected static final org.apache.hadoop.hdfs.protocol.proto.ErasureCodingProtos.GetErasureCodingCodecsRequestProto VOID_GET_EC_CODEC_REQUEST
-
-
Constructor Details
-
ClientNamenodeProtocolTranslatorPB
-
-
Method Details
-
close
public void close()
- Specified by: close in interface AutoCloseable
- Specified by: close in interface Closeable
-
getBlockLocations
Description copied from interface: ClientProtocol
Get locations of the blocks of the specified file within the specified range. DataNode locations for each block are sorted by the proximity to the client.
Returns LocatedBlocks, which contains the file length, blocks and their locations. DataNode locations for each block are sorted by the distance to the client's address. The client will then have to contact one of the indicated DataNodes to obtain the actual data.
- Specified by: getBlockLocations in interface ClientProtocol
- Parameters:
  src - file name
  offset - range start offset
  length - range length
- Returns: file length and array of blocks with their locations
- Throws:
  org.apache.hadoop.security.AccessControlException - If access is denied
  FileNotFoundException - If file src does not exist
  org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
  IOException - If an I/O error occurred
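As a rough illustration of how a client reasons about the range passed to getBlockLocations, the sketch below maps a byte range onto fixed-size block indices. The helper class and method names are hypothetical (not part of the HDFS API), and it assumes all blocks of the file are full-size, which real files need not satisfy for the last block.

```java
// Hypothetical helper, not HDFS code: which block indices does the byte
// range [offset, offset + length) touch, assuming a uniform block size?
class BlockRange {
    static long firstBlock(long offset, long blockSize) {
        return offset / blockSize;
    }

    static long lastBlock(long offset, long length, long blockSize) {
        // length must be > 0; the last touched byte is offset + length - 1
        return (offset + length - 1) / blockSize;
    }
}
```

For example, with the common 128 MB block size, a 2-byte read straddling the first block boundary touches blocks 0 and 1.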
-
getServerDefaults
Description copied from interface: ClientProtocol
Get server default values for a number of configuration params.
- Specified by: getServerDefaults in interface ClientProtocol
- Returns: a set of server default configuration values
- Throws:
  IOException
-
create
public HdfsFileStatus create(String src, org.apache.hadoop.fs.permission.FsPermission masked, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize, org.apache.hadoop.crypto.CryptoProtocolVersion[] supportedVersions, String ecPolicyName, String storagePolicy) throws IOException
Description copied from interface: ClientProtocol
Create a new file entry in the namespace. This will create an empty file specified by the source path. The path should reflect a full path originated at the root. The name-node does not have a notion of "current" directory for a client.
Once created, the file is visible and available for read to other clients. However, other clients cannot ClientProtocol.delete(String, boolean), re-create, or ClientProtocol.rename(String, String) it until the file is completed or explicitly abandoned as a result of lease expiration.
Blocks have a maximum size. Clients that intend to create multi-block files must also use ClientProtocol.addBlock(java.lang.String, java.lang.String, org.apache.hadoop.hdfs.protocol.ExtendedBlock, org.apache.hadoop.hdfs.protocol.DatanodeInfo[], long, java.lang.String[], java.util.EnumSet<org.apache.hadoop.hdfs.AddBlockFlag>).
- Specified by: create in interface ClientProtocol
- Parameters:
  src - path of the file being created.
  masked - masked permission.
  clientName - name of the current client.
  flag - indicates whether the file should be overwritten if it already exists, created if it does not exist, or appended to; and whether the file should be a replicate file regardless of its ancestor's replication or erasure coding policy.
  createParent - create missing parent directory if true
  replication - block replication factor.
  blockSize - maximum block size.
  supportedVersions - CryptoProtocolVersions supported by the client
  ecPolicyName - the name of the erasure coding policy. A null value means this file will inherit its parent directory's policy, either traditional replication or an erasure coding policy. ecPolicyName and the SHOULD_REPLICATE CreateFlag are mutually exclusive; it is invalid to set both the SHOULD_REPLICATE flag and a non-null ecPolicyName.
  storagePolicy - the name of the storage policy.
- Returns: the status of the created file; may be null if the server does not support returning the file status
- Throws:
  org.apache.hadoop.security.AccessControlException - If access is denied
  AlreadyBeingCreatedException - if the path does not exist.
  DSQuotaExceededException - If file creation violates disk space quota restriction
  org.apache.hadoop.fs.FileAlreadyExistsException - If file src already exists
  FileNotFoundException - If parent of src does not exist and createParent is false
  org.apache.hadoop.fs.ParentNotDirectoryException - If parent of src is not a directory.
  NSQuotaExceededException - If file creation violates name space quota restriction
  SafeModeException - create not allowed in safemode
  org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
  SnapshotAccessControlException - if path is in RO snapshot
  IOException - If an I/O error occurred
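The documented rule that ecPolicyName and the SHOULD_REPLICATE CreateFlag are mutually exclusive can be captured by a client-side precondition check. The class below is a hypothetical sketch, not part of HDFS; the policy name used in the example is only illustrative.

```java
// Hypothetical precondition check mirroring the create() contract:
// setting both SHOULD_REPLICATE and a non-null ecPolicyName is invalid.
class CreateArgs {
    static boolean isValidPolicyCombination(String ecPolicyName,
                                            boolean shouldReplicateFlag) {
        // null ecPolicyName means "inherit the parent directory's policy",
        // which is always compatible with either flag setting
        return !(shouldReplicateFlag && ecPolicyName != null);
    }
}
```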
-
truncate
Description copied from interface: ClientProtocol
Truncate file src to new size.
- Fails if src is a directory.
- Fails if src does not exist.
- Fails if src is not closed.
- Fails if the new size is greater than the current size.
This implementation of truncate is purely a namespace operation if truncate occurs at a block boundary. Otherwise it requires DataNode block recovery.
- Specified by: truncate in interface ClientProtocol
- Parameters:
  src - existing file
  newLength - the target size
- Returns: true if the client does not need to wait for block recovery, false if the client needs to wait for block recovery.
- Throws:
  org.apache.hadoop.security.AccessControlException - If access is denied
  FileNotFoundException - If file src is not found
  SafeModeException - truncate not allowed in safemode
  org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
  SnapshotAccessControlException - if path is in RO snapshot
  IOException - If an I/O error occurred
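The block-boundary distinction above reduces to simple arithmetic, assuming full-size blocks. The helper below is a hypothetical sketch (not HDFS code) of when a truncate needs DataNode block recovery, i.e. when truncate would return false and the client must wait.

```java
// Hypothetical sketch: a truncate that lands exactly on a block boundary is
// a pure namespace operation; any other new length leaves a partial last
// block that requires DataNode block recovery.
class TruncateCheck {
    static boolean needsBlockRecovery(long newLength, long blockSize) {
        return newLength % blockSize != 0;
    }
}
```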
-
append
public LastBlockWithStatus append(String src, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag) throws IOException
Description copied from interface: ClientProtocol
Append to the end of the file.
- Specified by: append in interface ClientProtocol
- Parameters:
  src - path of the file being appended to.
  clientName - name of the current client.
  flag - indicates whether the data is appended to a new block.
- Returns: wrapper with information about the last partial block and file status, if any
- Throws:
  org.apache.hadoop.security.AccessControlException - if permission to append to the file is denied by the system. As usual on the client side, the exception will be wrapped into a RemoteException. Appending to an existing file is allowed if the server is configured with the parameter dfs.support.append set to true; otherwise an IOException is thrown.
  FileNotFoundException - If file src is not found
  DSQuotaExceededException - If append violates disk space quota restriction
  SafeModeException - append not allowed in safemode
  org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
  SnapshotAccessControlException - if path is in RO snapshot
  IOException - If an I/O error occurred.
-
setReplication
Description copied from interface: ClientProtocol
Set replication for an existing file. The NameNode sets replication to the new value and returns. The actual block replication is not expected to be performed during this method call. The blocks will be populated or removed in the background as a result of routine block maintenance procedures.
- Specified by: setReplication in interface ClientProtocol
- Parameters:
  src - file name
  replication - new replication
- Returns: true if successful; false if file does not exist or is a directory
- Throws:
  org.apache.hadoop.security.AccessControlException - If access is denied
  DSQuotaExceededException - If replication violates disk space quota restriction
  FileNotFoundException - If file src is not found
  SafeModeException - not allowed in safemode
  org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink
  SnapshotAccessControlException - if path is in RO snapshot
  IOException - If an I/O error occurred
-
setPermission
public void setPermission(String src, org.apache.hadoop.fs.permission.FsPermission permission) throws IOException
Description copied from interface: ClientProtocol
Set permissions for an existing file/directory.
- Specified by: setPermission in interface ClientProtocol
- Throws:
  org.apache.hadoop.security.AccessControlException - If access is denied
  FileNotFoundException - If file src is not found
  SafeModeException - not allowed in safemode
  org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
  SnapshotAccessControlException - if path is in RO snapshot
  IOException - If an I/O error occurred
-
setOwner
Description copied from interface: ClientProtocol
Set Owner of a path (i.e. a file or a directory). The parameters username and groupname cannot both be null.
- Specified by: setOwner in interface ClientProtocol
- Parameters:
  src - file path
  username - If it is null, the original username remains unchanged.
  groupname - If it is null, the original groupname remains unchanged.
- Throws:
  org.apache.hadoop.security.AccessControlException - If access is denied
  FileNotFoundException - If file src is not found
  SafeModeException - not allowed in safemode
  org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
  SnapshotAccessControlException - if path is in RO snapshot
  IOException - If an I/O error occurred
-
abandonBlock
public void abandonBlock(ExtendedBlock b, long fileId, String src, String holder) throws IOException
Description copied from interface: ClientProtocol
The client can give up on a block by calling abandonBlock(). The client can then either obtain a new block, or complete or abandon the file. Any partial writes to the block will be discarded.
- Specified by: abandonBlock in interface ClientProtocol
- Parameters:
  b - Block to abandon
  fileId - The id of the file where the block resides. Older clients will pass GRANDFATHER_INODE_ID here.
  src - The path of the file where the block resides.
  holder - Lease holder.
- Throws:
  org.apache.hadoop.security.AccessControlException - If access is denied
  FileNotFoundException - file src is not found
  org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
  IOException - If an I/O error occurred
-
addBlock
public LocatedBlock addBlock(String src, String clientName, ExtendedBlock previous, DatanodeInfo[] excludeNodes, long fileId, String[] favoredNodes, EnumSet<AddBlockFlag> addBlockFlags) throws IOException
Description copied from interface: ClientProtocol
A client that wants to write an additional block to the indicated filename (which must currently be open for writing) should call addBlock(). addBlock() allocates a new block and the datanodes the block data should be replicated to. addBlock() also commits the previous block by reporting to the name-node the actual generation stamp and the length of the block that the client has transmitted to data-nodes.
- Specified by: addBlock in interface ClientProtocol
- Parameters:
  src - the file being created
  clientName - the name of the client that adds the block
  previous - previous block
  excludeNodes - a list of nodes that should not be allocated for the current block
  fileId - the id uniquely identifying a file
  favoredNodes - the list of nodes where the client wants the blocks. Nodes are identified by either host name or address.
  addBlockFlags - flags to advise the behavior of allocating and placing a new block.
- Returns: LocatedBlock allocated block information.
- Throws:
  org.apache.hadoop.security.AccessControlException - If access is denied
  FileNotFoundException - If file src is not found
  NotReplicatedYetException - previous blocks of the file are not replicated yet. Blocks cannot be added until replication completes.
  SafeModeException - create not allowed in safemode
  org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
  IOException - If an I/O error occurred
-
getAdditionalDatanode
public LocatedBlock getAdditionalDatanode(String src, long fileId, ExtendedBlock blk, DatanodeInfo[] existings, String[] existingStorageIDs, DatanodeInfo[] excludes, int numAdditionalNodes, String clientName) throws IOException
Description copied from interface: ClientProtocol
Get a datanode for an existing pipeline.
- Specified by: getAdditionalDatanode in interface ClientProtocol
- Parameters:
  src - the file being written
  fileId - the ID of the file being written
  blk - the block being written
  existings - the existing nodes in the pipeline
  excludes - the excluded nodes
  numAdditionalNodes - number of additional datanodes
  clientName - the name of the client
- Returns: the located block.
- Throws:
  org.apache.hadoop.security.AccessControlException - If access is denied
  FileNotFoundException - If file src is not found
  SafeModeException - create not allowed in safemode
  org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
  IOException - If an I/O error occurred
-
complete
public boolean complete(String src, String clientName, ExtendedBlock last, long fileId) throws IOException
Description copied from interface: ClientProtocol
The client is done writing data to the given filename, and would like to complete it. The function returns whether the file has been closed successfully. If the function returns false, the caller should try again. close() also commits the last block of the file by reporting to the name-node the actual generation stamp and the length of the block that the client has transmitted to data-nodes. A call to complete() will not return true until all the file's blocks have been replicated the minimum number of times. Thus, DataNode failures may cause a client to call complete() several times before succeeding.
- Specified by: complete in interface ClientProtocol
- Parameters:
  src - the file being created
  clientName - the name of the client that adds the block
  last - the last block info
  fileId - the id uniquely identifying a file
- Returns: true if all file blocks are minimally replicated or false otherwise
- Throws:
  org.apache.hadoop.security.AccessControlException - If access is denied
  FileNotFoundException - If file src is not found
  SafeModeException - create not allowed in safemode
  org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
  IOException - If an I/O error occurred
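The retry contract described above, where complete() may return false until all blocks reach minimal replication, can be sketched as a small loop. The Completer interface and the retry helper below are hypothetical stand-ins for the real RPC call, not HDFS code; real clients also sleep or back off between attempts.

```java
// Hypothetical stand-in for the complete() RPC so the retry pattern can be
// shown without a running cluster.
interface Completer {
    boolean complete();
}

class CompleteRetry {
    // Keep calling complete() until it reports success or attempts run out.
    static boolean completeWithRetries(Completer nn, int maxAttempts) {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            if (nn.complete()) {
                return true; // all blocks minimally replicated
            }
            // a real client would sleep/back off here before retrying
        }
        return false; // caller decides how to handle exhaustion
    }
}
```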
-
reportBadBlocks
Description copied from interface: ClientProtocol
The client wants to report corrupted blocks (blocks with specified locations on datanodes).
- Specified by: reportBadBlocks in interface ClientProtocol
- Parameters:
  blocks - Array of located blocks to report
- Throws:
  IOException
-
rename
Description copied from interface: ClientProtocol
Rename an item in the file system namespace.
- Specified by: rename in interface ClientProtocol
- Parameters:
  src - existing file or directory name.
  dst - new name.
- Returns: true if successful, or false if the old name does not exist or if the new name already belongs to the namespace.
- Throws:
  SnapshotAccessControlException - if path is in RO snapshot
  IOException - an I/O error occurred
-
rename2
public void rename2(String src, String dst, org.apache.hadoop.fs.Options.Rename... options) throws IOException
Description copied from interface: ClientProtocol
Rename src to dst.
- Fails if src is a file and dst is a directory.
- Fails if src is a directory and dst is a file.
- Fails if the parent of dst does not exist or is a file.
Without the OVERWRITE option, rename fails if dst already exists. With the OVERWRITE option, rename overwrites dst if it is a file or an empty directory. Rename fails if dst is a non-empty directory.
This implementation of rename is atomic.
- Specified by: rename2 in interface ClientProtocol
- Parameters:
  src - existing file or directory name.
  dst - new name.
  options - Rename options
- Throws:
  org.apache.hadoop.security.AccessControlException - If access is denied
  DSQuotaExceededException - If rename violates disk space quota restriction
  org.apache.hadoop.fs.FileAlreadyExistsException - If dst already exists and options does not include the Options.Rename.OVERWRITE option.
  FileNotFoundException - If src does not exist
  NSQuotaExceededException - If rename violates namespace quota restriction
  org.apache.hadoop.fs.ParentNotDirectoryException - If parent of dst is not a directory
  SafeModeException - rename not allowed in safemode
  org.apache.hadoop.fs.UnresolvedLinkException - If src or dst contains a symlink
  SnapshotAccessControlException - if path is in RO snapshot
  IOException - If an I/O error occurred
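The overwrite rules above form a small decision table, which the sketch below models as a pure function. The class and parameter names are hypothetical, meant only to restate the documented contract, not to reproduce the NameNode's actual implementation (which also checks the src/dst type conflicts listed first).

```java
// Hypothetical model of the rename2 OVERWRITE rules: may dst be replaced?
class RenameRules {
    static boolean mayReplaceDst(boolean overwrite, boolean dstExists,
                                 boolean dstIsDir, boolean dstDirEmpty) {
        if (!dstExists) {
            return true;                 // nothing in the way
        }
        if (!overwrite) {
            return false;                // dst exists and no OVERWRITE option
        }
        return !dstIsDir || dstDirEmpty; // a file or an empty directory
    }
}
```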
-
concat
Description copied from interface: ClientProtocol
Moves blocks from srcs to trg and deletes srcs.
- Specified by: concat in interface ClientProtocol
- Parameters:
  trg - existing file
  srcs - list of existing files (same block size, same replication)
- Throws:
  IOException - if some arguments are invalid
  org.apache.hadoop.fs.UnresolvedLinkException - if trg or srcs contains a symlink
  SnapshotAccessControlException - if path is in RO snapshot
-
delete
Description copied from interface: ClientProtocol
Delete the given file or directory from the file system. Same as delete, but provides a way to avoid accidentally deleting non-empty directories programmatically.
- Specified by:
delete in interface ClientProtocol
- Parameters:
src - existing name. recursive - if true, deletes a non-empty directory recursively; otherwise throws an exception.
- Returns:
- true only if the existing file or directory was actually removed from the file system.
- Throws:
org.apache.hadoop.security.AccessControlException - If access is denied
FileNotFoundException - If file src is not found
SafeModeException - create not allowed in safemode
org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
SnapshotAccessControlException - if path is in RO snapshot
org.apache.hadoop.fs.PathIsNotEmptyDirectoryException - if path is a non-empty directory and recursive is set to false
IOException - If an I/O error occurred
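Client code typically invokes this through FileSystem.delete; a sketch, assuming a configured HDFS client and a hypothetical path:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DeleteExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // recursive=false throws PathIsNotEmptyDirectoryException on a
    // non-empty directory; recursive=true removes the whole subtree.
    boolean removed = fs.delete(new Path("/tmp/scratch"), true);
    System.out.println("removed: " + removed);
  }
}
```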
-
mkdirs
public boolean mkdirs(String src, org.apache.hadoop.fs.permission.FsPermission masked, boolean createParent) throws IOException
Description copied from interface: ClientProtocol
Create a directory (or hierarchy of directories) with the given name and permission.
- Specified by:
mkdirs in interface ClientProtocol
- Parameters:
src - The path of the directory being created. masked - The masked permission of the directory being created. createParent - create missing parent directories if true
- Returns:
- True if the operation succeeds.
- Throws:
org.apache.hadoop.security.AccessControlException - If access is denied
org.apache.hadoop.fs.FileAlreadyExistsException - If src already exists
FileNotFoundException - If parent of src does not exist and createParent is false
NSQuotaExceededException - If file creation violates quota restriction
org.apache.hadoop.fs.ParentNotDirectoryException - If parent of src is not a directory
SafeModeException - create not allowed in safemode
org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
SnapshotAccessControlException - if path is in RO snapshot
IOException - If an I/O error occurred.
RunTimeExceptions:
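Through the public API, FileSystem.mkdirs reaches this RPC; a sketch with a hypothetical path, assuming a configured HDFS client:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class MkdirsExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // FileSystem#mkdirs creates missing parents; the permission is
    // applied subject to the client's umask.
    boolean ok = fs.mkdirs(new Path("/data/2024/logs"),
                           new FsPermission((short) 0755));
    System.out.println("created: " + ok);
  }
}
```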
-
getListing
public DirectoryListing getListing(String src, byte[] startAfter, boolean needLocation) throws IOException
Description copied from interface: ClientProtocol
Get a partial listing of the indicated directory.
- Specified by:
getListing in interface ClientProtocol
- Parameters:
src - the directory name. startAfter - the name to start listing after, encoded in Java UTF-8. needLocation - if the FileStatus should contain block locations
- Returns:
- a partial listing starting after startAfter
- Throws:
org.apache.hadoop.security.AccessControlException - permission denied
FileNotFoundException - file src is not found
org.apache.hadoop.fs.UnresolvedLinkException - If src contains a symlink
IOException - If an I/O error occurred
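Most clients never drive the startAfter cookie themselves; the listing iterator in the public API pages through the directory for them. A sketch, assuming a configured HDFS client and a hypothetical path:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class ListingExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // The iterator issues repeated partial-listing RPCs under the hood,
    // passing the last returned name as startAfter for each page.
    RemoteIterator<FileStatus> it = fs.listStatusIterator(new Path("/user"));
    while (it.hasNext()) {
      System.out.println(it.next().getPath());
    }
  }
}
```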
-
getBatchedListing
public BatchedDirectoryListing getBatchedListing(String[] srcs, byte[] startAfter, boolean needLocation) throws IOException
Description copied from interface: ClientProtocol
Get a partial listing of the input directories.
- Specified by:
getBatchedListing in interface ClientProtocol
- Parameters:
srcs - the input directories. startAfter - the name to start listing after, encoded in Java UTF-8. needLocation - if the FileStatus should contain block locations
- Returns:
- a partial listing starting after startAfter. null if the input is empty
- Throws:
IOException- if an I/O error occurred
-
renewLease
Description copied from interface: ClientProtocol
Client programs can cause stateful changes in the NameNode that affect other clients. A client may obtain a file and neither abandon nor complete it. A client might hold a series of locks that prevent other clients from proceeding. Clearly, it would be bad if a client held a bunch of locks that it never gave up. This can happen easily if the client dies unexpectedly.
So, the NameNode will revoke the locks and live file-creates for clients that it thinks have died. A client tells the NameNode that it is still alive by periodically calling renewLease(). If a certain amount of time passes since the last call to renewLease(), the NameNode assumes the client has died.
- Specified by:
renewLease in interface ClientProtocol
- Parameters:
namespaces - The full namespace list to which RBF should forward the renewLease RPC. On the NN side this value should be null. On the RBF side, if this value is null the RPC is forwarded to all available namespaces; otherwise it is forwarded only to the specified namespaces.
- Throws:
org.apache.hadoop.security.AccessControlException - permission denied
IOException - If an I/O error occurred
-
recoverLease
Description copied from interface: ClientProtocol
Start lease recovery. Lightweight NameNode operation to trigger lease recovery.
- Specified by:
recoverLease in interface ClientProtocol
- Parameters:
src - path of the file to start lease recovery. clientName - name of the current client
- Returns:
- true if the file is already closed
- Throws:
IOException
-
getStats
Description copied from interface: ClientProtocol
Get an array of aggregated statistics combining blocks of both type BlockType.CONTIGUOUS and BlockType.STRIPED in the filesystem. Use public constants like ClientProtocol.GET_STATS_CAPACITY_IDX in place of actual numbers to index into the array.
- [0] contains the total storage capacity of the system, in bytes.
- [1] contains the total used space of the system, in bytes.
- [2] contains the available storage of the system, in bytes.
- [3] contains number of low redundancy blocks in the system.
- [4] contains number of corrupt blocks.
- [5] contains number of blocks without any good replicas left.
- [6] contains number of blocks which have replication factor 1 and have lost the only replica.
- [7] contains number of bytes that are at risk for deletion.
- [8] contains number of pending deletion blocks.
- Specified by:
getStats in interface ClientProtocol
- Throws:
IOException
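Decoding the returned array is easier with named indices. A minimal, self-contained sketch (the sample array and the literal index values below are illustrative; real code should use the ClientProtocol.GET_STATS_*_IDX constants):

```java
public class StatsDecode {
  // Illustrative indices mirroring the documented array layout above.
  static final int CAPACITY_IDX = 0;
  static final int USED_IDX = 1;
  static final int REMAINING_IDX = 2;

  /** Integer percentage of total capacity currently used. */
  static long percentUsed(long[] stats) {
    return 100 * stats[USED_IDX] / stats[CAPACITY_IDX];
  }

  public static void main(String[] args) {
    // Fabricated sample data: 1000 bytes capacity, 250 used, 750 free.
    long[] stats = {1000L, 250L, 750L, 0, 0, 0, 0, 0, 0};
    System.out.println(percentUsed(stats)); // prints 25
  }
}
```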
-
getReplicatedBlockStats
Description copied from interface: ClientProtocol
Get statistics pertaining to blocks of type BlockType.CONTIGUOUS in the filesystem.
- Specified by:
getReplicatedBlockStats in interface ClientProtocol
- Throws:
IOException
-
getECBlockGroupStats
Description copied from interface: ClientProtocol
Get statistics pertaining to blocks of type BlockType.STRIPED in the filesystem.
- Specified by:
getECBlockGroupStats in interface ClientProtocol
- Throws:
IOException
-
getDatanodeReport
Description copied from interface: ClientProtocol
Get a report on the system's current datanodes. One DatanodeInfo object is returned for each DataNode. Return live datanodes if type is LIVE; dead datanodes if type is DEAD; otherwise all datanodes if type is ALL.
- Specified by:
getDatanodeReport in interface ClientProtocol
- Throws:
IOException
-
getDatanodeStorageReport
public DatanodeStorageReport[] getDatanodeStorageReport(HdfsConstants.DatanodeReportType type) throws IOException
Description copied from interface: ClientProtocol
Get a report on the current datanode storages.
- Specified by:
getDatanodeStorageReport in interface ClientProtocol
- Throws:
IOException
-
getPreferredBlockSize
Description copied from interface: ClientProtocol
Get the block size for the given file.
- Specified by:
getPreferredBlockSize in interface ClientProtocol
- Parameters:
filename - The name of the file
- Returns:
- The number of bytes in each block
- Throws:
IOException
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
-
setSafeMode
public boolean setSafeMode(HdfsConstants.SafeModeAction action, boolean isChecked) throws IOException
Description copied from interface: ClientProtocol
Enter, leave or get safe mode.
Safe mode is a name node state in which it
- does not accept changes to name space (read-only), and
- does not replicate or delete blocks.
Safe mode is entered automatically at name node startup. Safe mode can also be entered manually using setSafeMode(SafeModeAction.SAFEMODE_ENTER, false).
At startup the name node accepts data node reports, collecting information about block locations. In order to leave safe mode it needs to collect a configurable percentage, called the threshold, of blocks that satisfy the minimal replication condition. The minimal replication condition is that each block must have at least dfs.namenode.replication.min replicas. When the threshold is reached the name node extends safe mode for a configurable amount of time to let the remaining data nodes check in before it starts replicating missing blocks. Then the name node leaves safe mode.
If safe mode is turned on manually using setSafeMode(SafeModeAction.SAFEMODE_ENTER, false) then the name node stays in safe mode until it is manually turned off using setSafeMode(SafeModeAction.SAFEMODE_LEAVE, false). The current state of the name node can be verified using setSafeMode(SafeModeAction.SAFEMODE_GET, false).
Configuration parameters:
- dfs.safemode.threshold.pct is the threshold parameter.
- dfs.safemode.extension is the safe mode extension parameter.
- dfs.namenode.replication.min is the minimal replication parameter.
Special cases:
- The name node does not enter safe mode at startup if the threshold is set to 0 or if the name space is empty.
- If the threshold is set to 1 then all blocks need to have at least minimal replication.
- If the threshold value is greater than 1 then the name node will not be able to turn off safe mode automatically.
- Safe mode can always be turned off manually.
- Specified by:
setSafeMode in interface ClientProtocol
- Parameters:
action -
action-- 0 leave safe mode;
- 1 enter safe mode;
- 2 get safe mode state.
isChecked - If true then the action will be performed only on the Active NN.
- Returns:
- 0 if the safe mode is OFF or
- 1 if the safe mode is ON.
- Throws:
IOException
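The documented action and return codes can be summarized in a small self-contained sketch (real clients pass an HdfsConstants.SafeModeAction enum rather than raw integers; the mappings below simply restate the documentation above):

```java
public class SafeModeCodes {
  /** Documented meanings of the numeric action argument. */
  static String describeAction(int action) {
    switch (action) {
      case 0: return "leave safe mode";
      case 1: return "enter safe mode";
      case 2: return "get safe mode state";
      default: throw new IllegalArgumentException("unknown action " + action);
    }
  }

  /** Documented meaning of the return value: 0 = OFF, 1 = ON. */
  static boolean isSafeModeOn(int result) {
    return result == 1;
  }

  public static void main(String[] args) {
    System.out.println(describeAction(2)); // prints "get safe mode state"
    System.out.println(isSafeModeOn(0));   // prints false
  }
}
```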
-
saveNamespace
Description copied from interface: ClientProtocol
Save namespace image. Saves the current namespace into the storage directories and resets the edit log. Requires superuser privilege and safe mode.
- Specified by:
saveNamespace in interface ClientProtocol
- Parameters:
timeWindow - NameNode does a checkpoint if the latest checkpoint was done beyond the given time period (in seconds). txGap - NameNode does a checkpoint if the gap between the latest checkpoint and the latest transaction id is greater than this gap.
- Returns:
- whether an extra checkpoint has been done
- Throws:
IOException- if image creation failed.
-
rollEdits
Description copied from interface: ClientProtocol
Roll the edit log. Requires superuser privileges.
- Specified by:
rollEdits in interface ClientProtocol
- Returns:
- the txid of the new segment
- Throws:
org.apache.hadoop.security.AccessControlException - if the superuser privilege is violated
IOException - if log roll fails
-
restoreFailedStorage
Description copied from interface: ClientProtocol
Enable/disable restore of failed storage. Sets a flag to enable restore of failed storage replicas.
- Specified by:
restoreFailedStorage in interface ClientProtocol
- Throws:
org.apache.hadoop.security.AccessControlException - if the superuser privilege is violated.
IOException
-
refreshNodes
Description copied from interface: ClientProtocol
Tells the namenode to reread the hosts and exclude files.
- Specified by:
refreshNodes in interface ClientProtocol
- Throws:
IOException
-
finalizeUpgrade
Description copied from interface: ClientProtocol
Finalize previous upgrade. Remove file system state saved during the upgrade. The upgrade will become irreversible.
- Specified by:
finalizeUpgrade in interface ClientProtocol
- Throws:
IOException
-
upgradeStatus
Description copied from interface: ClientProtocol
Get status of upgrade - finalized or not.
- Specified by:
upgradeStatus in interface ClientProtocol
- Returns:
- true if upgrade is finalized or if no upgrade is in progress and false otherwise.
- Throws:
IOException
-
rollingUpgrade
public RollingUpgradeInfo rollingUpgrade(HdfsConstants.RollingUpgradeAction action) throws IOException
Description copied from interface: ClientProtocol
Rolling upgrade operations.
- Specified by:
rollingUpgrade in interface ClientProtocol
- Parameters:
action - either query, prepare or finalize.
- Returns:
- rolling upgrade information. On query, if no upgrade is in progress, returns null.
- Throws:
IOException
-
listCorruptFileBlocks
- Specified by:
listCorruptFileBlocks in interface ClientProtocol
- Returns:
- CorruptFileBlocks, containing a list of corrupt files (with duplicates if there is more than one corrupt block in a file) and a cookie
- Throws:
IOException
Each call returns a subset of the corrupt files in the system. To obtain all corrupt files, call this method repeatedly and each time pass in the cookie returned from the previous call.
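A client typically drives this cookie loop through the public FileSystem API rather than the translator directly; a sketch, assuming a configured HDFS client with sufficient privileges:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class CorruptFilesExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // The iterator handles the cookie loop internally, issuing repeated
    // listCorruptFileBlocks RPCs until the set is exhausted.
    RemoteIterator<Path> it = fs.listCorruptFileBlocks(new Path("/"));
    while (it.hasNext()) {
      System.out.println(it.next());
    }
  }
}
```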
-
metaSave
Description copied from interface: ClientProtocol
Dumps namenode data structures into the specified file. If the file already exists, appends to it.
- Specified by:
metaSave in interface ClientProtocol
- Throws:
IOException
-
getFileInfo
Description copied from interface: ClientProtocol
Get the file info for a specific file or directory.
- Specified by:
getFileInfo in interface ClientProtocol
- Parameters:
src- The string representation of the path to the file- Returns:
- object containing information regarding the file or null if file not found
- Throws:
org.apache.hadoop.security.AccessControlException - permission denied
FileNotFoundException - file src is not found
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException - If an I/O error occurred
-
getLocatedFileInfo
public HdfsLocatedFileStatus getLocatedFileInfo(String src, boolean needBlockToken) throws IOException
Description copied from interface: ClientProtocol
Get the file info for a specific file or directory with LocatedBlocks.
- Specified by:
getLocatedFileInfo in interface ClientProtocol
- Parameters:
src - The string representation of the path to the file. needBlockToken - Generate block tokens for LocatedBlocks
- Returns:
- object containing information regarding the file or null if file not found
- Throws:
org.apache.hadoop.security.AccessControlException - permission denied
FileNotFoundException - file src is not found
IOException - If an I/O error occurred
-
getFileLinkInfo
Description copied from interface: ClientProtocol
Get the file info for a specific file or directory. If the path refers to a symlink then the FileStatus of the symlink is returned.
- Specified by:
getFileLinkInfo in interface ClientProtocol
- Parameters:
src- The string representation of the path to the file- Returns:
- object containing information regarding the file or null if file not found
- Throws:
org.apache.hadoop.security.AccessControlException - permission denied
org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink
IOException - If an I/O error occurred
-
getContentSummary
Description copied from interface: ClientProtocol
Get ContentSummary rooted at the specified directory.
- Specified by:
getContentSummary in interface ClientProtocol
- Parameters:
path - The string representation of the path
- Throws:
org.apache.hadoop.security.AccessControlException - permission denied
FileNotFoundException - file path is not found
org.apache.hadoop.fs.UnresolvedLinkException - if path contains a symlink.
IOException - If an I/O error occurred
-
setQuota
public void setQuota(String path, long namespaceQuota, long storagespaceQuota, org.apache.hadoop.fs.StorageType type) throws IOException
Description copied from interface: ClientProtocol
Set the quota for a directory.
- Specified by:
setQuota in interface ClientProtocol
- Parameters:
path - The string representation of the path to the directory. namespaceQuota - Limit on the number of names in the tree rooted at the directory. storagespaceQuota - Limit on storage space occupied by all the files under this directory. type - StorageType that the space quota is intended to be set on. It may be null when called for a traditional space/namespace quota. When type is not null, the storagespaceQuota parameter is for the specified type and namespaceQuota must be HdfsConstants.QUOTA_DONT_SET.
The quota can have three types of values: (1) 0 or more will set the quota to that value, (2) HdfsConstants.QUOTA_DONT_SET implies the quota will not be changed, and (3) HdfsConstants.QUOTA_RESET implies the quota will be reset. Any other value is a runtime error.
- Throws:
org.apache.hadoop.security.AccessControlException - permission denied
FileNotFoundException - file path is not found
QuotaExceededException - if the directory size is greater than the given quota
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
SnapshotAccessControlException - if path is in RO snapshot
IOException - If an I/O error occurred
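The three kinds of quota values can be sketched as a small self-contained helper (the sentinel values below are illustrative stand-ins; real code should use the constants defined in HdfsConstants):

```java
public class QuotaUpdate {
  // Illustrative sentinels; the real values live in HdfsConstants.
  static final long QUOTA_DONT_SET = Long.MAX_VALUE;
  static final long QUOTA_RESET = -1L;

  /** Returns the quota that would be in effect after the update. */
  static long apply(long current, long requested) {
    if (requested == QUOTA_DONT_SET) return current;   // unchanged
    if (requested == QUOTA_RESET) return QUOTA_RESET;  // quota removed
    if (requested >= 0) return requested;              // set to that value
    throw new IllegalArgumentException("invalid quota " + requested);
  }

  public static void main(String[] args) {
    System.out.println(apply(100L, QUOTA_DONT_SET)); // prints 100
    System.out.println(apply(100L, 50L));            // prints 50
  }
}
```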
-
fsync
Description copied from interface: ClientProtocol
Write all metadata for this file into persistent storage. The file must be currently open for writing.
- Specified by:
fsync in interface ClientProtocol
- Parameters:
src - The string representation of the path. fileId - The inode ID, or GRANDFATHER_INODE_ID if the client is too old to support fsync with inode IDs. client - The string representation of the client. lastBlockLength - The length of the last block (under construction) to be reported to NameNode
- Throws:
org.apache.hadoop.security.AccessControlException - permission denied
FileNotFoundException - file src is not found
org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink.
IOException - If an I/O error occurred
-
setTimes
Description copied from interface: ClientProtocol
Sets the modification and access time of the file to the specified time.
- Specified by:
setTimes in interface ClientProtocol
- Parameters:
src - The string representation of the path. mtime - The number of milliseconds since Jan 1, 1970. Setting a negative mtime means that modification time should not be set by this call. atime - The number of milliseconds since Jan 1, 1970. Setting a negative atime means that access time should not be set by this call.
- Throws:
org.apache.hadoop.security.AccessControlException - permission denied
FileNotFoundException - file src is not found
org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink.
SnapshotAccessControlException - if path is in RO snapshot
IOException - If an I/O error occurred
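The negative-means-unchanged convention can be restated as a tiny self-contained helper (the timestamps below are fabricated sample values):

```java
public class TimesUpdate {
  /** Millis since the epoch that would be stored after setTimes. */
  static long resolve(long current, long requested) {
    // A negative argument means "leave this timestamp unchanged".
    return requested < 0 ? current : requested;
  }

  public static void main(String[] args) {
    long mtime = resolve(1_700_000_000_000L, -1L);              // unchanged
    long atime = resolve(1_700_000_000_000L, 1_700_000_123_456L); // updated
    System.out.println(mtime + " " + atime);
  }
}
```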
-
createSymlink
public void createSymlink(String target, String link, org.apache.hadoop.fs.permission.FsPermission dirPerm, boolean createParent) throws IOException
Description copied from interface: ClientProtocol
Create a symlink to a file or directory.
- Specified by:
createSymlink in interface ClientProtocol
- Parameters:
target - The path of the destination that the link points to. link - The path of the link being created. dirPerm - permissions to use when creating parent directories. createParent - if true then missing parent dirs are created; if false then the parent must exist
- Throws:
org.apache.hadoop.security.AccessControlException - permission denied
org.apache.hadoop.fs.FileAlreadyExistsException - If file link already exists
FileNotFoundException - If parent of link does not exist and createParent is false
org.apache.hadoop.fs.ParentNotDirectoryException - If parent of link is not a directory.
org.apache.hadoop.fs.UnresolvedLinkException - if link contains a symlink.
SnapshotAccessControlException - if path is in RO snapshot
IOException - If an I/O error occurred
-
getLinkTarget
Description copied from interface: ClientProtocol
Return the target of the given symlink. If there is an intermediate symlink in the path (i.e. a symlink leading up to the final path component) then the given path is returned with this symlink resolved.
- Specified by:
getLinkTarget in interface ClientProtocol
- Parameters:
path - The path with a link that needs resolution.
- Returns:
- The path after resolving the first symbolic link in the path.
- Throws:
org.apache.hadoop.security.AccessControlException - permission denied
FileNotFoundException - If path does not exist
IOException - If the given path does not refer to a symlink or an I/O error occurred
-
updateBlockForPipeline
public LocatedBlock updateBlockForPipeline(ExtendedBlock block, String clientName) throws IOException
Description copied from interface: ClientProtocol
Get a new generation stamp together with an access token for a block under construction. This method is called only when a client needs to recover a failed pipeline or set up a pipeline for appending to a block.
- Specified by:
updateBlockForPipeline in interface ClientProtocol
- Parameters:
block - a block. clientName - the name of the client
- Returns:
- a located block with a new generation stamp and an access token
- Throws:
IOException- if any error occurs
-
updatePipeline
public void updatePipeline(String clientName, ExtendedBlock oldBlock, ExtendedBlock newBlock, DatanodeID[] newNodes, String[] storageIDs) throws IOException
Description copied from interface: ClientProtocol
Update a pipeline for a block under construction.
- Specified by:
updatePipeline in interface ClientProtocol
- Parameters:
clientName - the name of the client. oldBlock - the old block. newBlock - the new block containing new generation stamp and length. newNodes - datanodes in the pipeline
- Throws:
IOException- if any error occurs
-
getDelegationToken
public org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> getDelegationToken(org.apache.hadoop.io.Text renewer) throws IOException
Description copied from interface: ClientProtocol
Get a valid Delegation Token.
- Specified by:
getDelegationToken in interface ClientProtocol
- Parameters:
renewer - the designated renewer for the token
- Throws:
IOException
-
renewDelegationToken
public long renewDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token) throws IOException
Description copied from interface: ClientProtocol
Renew an existing delegation token.
- Specified by:
renewDelegationToken in interface ClientProtocol
- Parameters:
token - delegation token obtained earlier
- Returns:
- the new expiration time
- Throws:
IOException
-
cancelDelegationToken
public void cancelDelegationToken(org.apache.hadoop.security.token.Token<DelegationTokenIdentifier> token) throws IOException
Description copied from interface: ClientProtocol
Cancel an existing delegation token.
- Specified by:
cancelDelegationToken in interface ClientProtocol
- Parameters:
token - delegation token
- Throws:
IOException
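The three delegation-token calls form a lifecycle that is usually driven through the public FileSystem and Token APIs; a sketch, assuming a configured, Kerberos-enabled HDFS client and a hypothetical renewer principal:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.token.Token;

public class TokenLifecycle {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Obtain a token, naming the principal allowed to renew it.
    Token<?> token = fs.getDelegationToken("yarn");
    // Renew before expiry; the call returns the new expiration time.
    long newExpiry = token.renew(fs.getConf());
    // Cancel when the job is done so the token cannot be reused.
    token.cancel(fs.getConf());
    System.out.println("renewed until " + newExpiry);
  }
}
```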
-
setBalancerBandwidth
Description copied from interface: ClientProtocol
Tell all datanodes to use a new, non-persistent bandwidth value for dfs.datanode.balance.bandwidthPerSec.
- Specified by:
setBalancerBandwidth in interface ClientProtocol
- Parameters:
bandwidth - Balancer bandwidth in bytes per second for this datanode.
- Throws:
IOException
-
isMethodSupported
- Specified by:
isMethodSupported in interface org.apache.hadoop.ipc.ProtocolMetaInterface
- Throws:
IOException
-
getDataEncryptionKey
- Specified by:
getDataEncryptionKey in interface ClientProtocol
- Returns:
- encryption key so a client can encrypt data sent via the DataTransferProtocol to/from DataNodes.
- Throws:
IOException
-
isFileClosed
Description copied from interface: ClientProtocol
Get the close status of a file.
- Specified by:
isFileClosed in interface ClientProtocol
- Parameters:
src - The string representation of the path to the file
- Returns:
- return true if file is closed
- Throws:
org.apache.hadoop.security.AccessControlException - permission denied
FileNotFoundException - file src is not found
org.apache.hadoop.fs.UnresolvedLinkException - if the path contains a symlink.
IOException - If an I/O error occurred
-
getUnderlyingProxyObject
- Specified by:
getUnderlyingProxyObject in interface org.apache.hadoop.ipc.ProtocolTranslator
-
createSnapshot
Description copied from interface: ClientProtocol
Create a snapshot.
- Specified by:
createSnapshot in interface ClientProtocol
- Parameters:
snapshotRoot - the path that is being snapshotted. snapshotName - name of the snapshot created
- Returns:
- the snapshot path.
- Throws:
IOException
-
deleteSnapshot
Description copied from interface: ClientProtocol
Delete a specific snapshot of a snapshottable directory.
- Specified by:
deleteSnapshot in interface ClientProtocol
- Parameters:
snapshotRoot - The snapshottable directory. snapshotName - Name of the snapshot for the snapshottable directory
- Throws:
IOException
-
allowSnapshot
Description copied from interface: ClientProtocol
Allow snapshots on a directory.
- Specified by:
allowSnapshot in interface ClientProtocol
- Parameters:
snapshotRoot - the directory to be snapshotted
- Throws:
IOException- on error
-
disallowSnapshot
Description copied from interface: ClientProtocol
Disallow snapshots on a directory.
- Specified by:
disallowSnapshot in interface ClientProtocol
- Parameters:
snapshotRoot - the directory to disallow snapshots on
- Throws:
IOException- on error
-
renameSnapshot
public void renameSnapshot(String snapshotRoot, String snapshotOldName, String snapshotNewName) throws IOException
Description copied from interface: ClientProtocol
Rename a snapshot.
- Specified by:
renameSnapshot in interface ClientProtocol
- Parameters:
snapshotRoot - the directory path where the snapshot was taken. snapshotOldName - old name of the snapshot. snapshotNewName - new name of the snapshot
- Throws:
IOException
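The snapshot operations above are usually reached through DistributedFileSystem; a sketch of the full lifecycle, assuming a configured HDFS client and a hypothetical directory (allowSnapshot and disallowSnapshot require superuser privileges):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class SnapshotLifecycle {
  public static void main(String[] args) throws Exception {
    DistributedFileSystem dfs =
        (DistributedFileSystem) FileSystem.get(new Configuration());
    Path dir = new Path("/data/reports");
    dfs.allowSnapshot(dir);                        // mark dir snapshottable
    dfs.createSnapshot(dir, "nightly-2024-01-01"); // read-only point in time
    dfs.renameSnapshot(dir, "nightly-2024-01-01", "baseline");
    dfs.deleteSnapshot(dir, "baseline");
    dfs.disallowSnapshot(dir);                     // requires no snapshots left
  }
}
```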
-
getSnapshottableDirListing
Description copied from interface: ClientProtocol
Get the list of snapshottable directories that are owned by the current user. Return all the snapshottable directories if the current user is a super user.
- Specified by:
getSnapshottableDirListing in interface ClientProtocol
- Returns:
- The list of all the current snapshottable directories.
- Throws:
IOException- If an I/O error occurred.
-
getSnapshotListing
Description copied from interface: ClientProtocol
Get a listing of all the snapshots for a snapshottable directory.
- Specified by:
getSnapshotListing in interface ClientProtocol
- Returns:
- Information about all the snapshots for a snapshottable directory
- Throws:
IOException- If an I/O error occurred
-
getSnapshotDiffReport
public SnapshotDiffReport getSnapshotDiffReport(String snapshotRoot, String fromSnapshot, String toSnapshot) throws IOException
Description copied from interface: ClientProtocol
Get the difference between two snapshots, or between a snapshot and the current tree of a directory.
- Specified by:
getSnapshotDiffReport in interface ClientProtocol
- Parameters:
snapshotRoot - full path of the directory where snapshots are taken. fromSnapshot - snapshot name of the from point; null indicates the current tree. toSnapshot - snapshot name of the to point; null indicates the current tree.
- Returns:
- The difference report represented as a
SnapshotDiffReport. - Throws:
IOException- on error
-
getSnapshotDiffReportListing
public SnapshotDiffReportListing getSnapshotDiffReportListing(String snapshotRoot, String fromSnapshot, String toSnapshot, byte[] startPath, int index) throws IOException
Description copied from interface: ClientProtocol
Get the difference between two snapshots of a directory iteratively.
- Specified by:
getSnapshotDiffReportListing in interface ClientProtocol
- Parameters:
snapshotRoot - full path of the directory where snapshots are taken. fromSnapshot - snapshot name of the from point; null indicates the current tree. toSnapshot - snapshot name of the to point; null indicates the current tree. startPath - path relative to the snapshottable root directory from where the snapshot diff computation needs to start across multiple RPC calls. index - index in the created or deleted list of the directory at which the snapshot diff computation stopped during the last RPC call because the number of entries exceeded the snapshot diff entry limit; -1 indicates the snapshot diff computation needs to start right from the startPath provided.
- Returns:
- The difference report represented as a
SnapshotDiffReport. - Throws:
IOException- on error
-
addCacheDirective
public long addCacheDirective(CacheDirectiveInfo directive, EnumSet<CacheFlag> flags) throws IOException
Description copied from interface: ClientProtocol
Add a CacheDirective to the CacheManager.
- Specified by:
addCacheDirective in interface ClientProtocol
- Parameters:
directive - A CacheDirectiveInfo to be added. flags - CacheFlags to use for this operation.
- Returns:
- A CacheDirectiveInfo associated with the added directive
- Throws:
IOException- if the directive could not be added
-
modifyCacheDirective
public void modifyCacheDirective(CacheDirectiveInfo directive, EnumSet<CacheFlag> flags) throws IOException
Description copied from interface: ClientProtocol
Modify a CacheDirective in the CacheManager.
- Specified by:
modifyCacheDirective in interface ClientProtocol
- Parameters:
flags - CacheFlags to use for this operation.
- Throws:
IOException- if the directive could not be modified
-
removeCacheDirective
Description copied from interface: ClientProtocol
Remove a CacheDirectiveInfo from the CacheManager.
- Specified by:
removeCacheDirective in interface ClientProtocol
- Parameters:
id - of a CacheDirectiveInfo
- Throws:
IOException- if the cache directive could not be removed
-
listCacheDirectives
public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<CacheDirectiveEntry> listCacheDirectives(long prevId, CacheDirectiveInfo filter) throws IOException
Description copied from interface: ClientProtocol
List the set of cached paths of a cache pool. Incrementally fetches results from the server.
- Specified by:
listCacheDirectives in interface ClientProtocol
- Parameters:
prevId - The last listed entry ID, or -1 if this is the first call to listCacheDirectives. filter - Parameters to use to filter the list results, or null to display all directives visible to us.
- Returns:
- A batch of CacheDirectiveEntry objects.
- Throws:
IOException
-
addCachePool
Description copied from interface: ClientProtocol
Add a new cache pool.
- Specified by:
addCachePool in interface ClientProtocol
- Parameters:
info - Description of the new cache pool
- Throws:
IOException- If the request could not be completed.
-
modifyCachePool
Description copied from interface: ClientProtocol
Modify an existing cache pool.
- Specified by:
modifyCachePool in interface ClientProtocol
- Parameters:
req - The request to modify a cache pool.
- Throws:
IOException- If the request could not be completed.
-
removeCachePool
Description copied from interface: ClientProtocol
Remove a cache pool.
- Specified by:
removeCachePool in interface ClientProtocol
- Parameters:
cachePoolName - name of the cache pool to remove.
- Throws:
IOException- if the cache pool did not exist, or could not be removed.
-
listCachePools
public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<CachePoolEntry> listCachePools(String prevKey) throws IOException
Description copied from interface: ClientProtocol
List the set of cache pools. Incrementally fetches results from the server.
- Specified by:
listCachePools in interface ClientProtocol
- Parameters:
prevKey - name of the last pool listed, or the empty string if this is the first invocation of listCachePools
- Returns:
- A batch of CachePoolEntry objects.
- Throws:
IOException
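The cache pool and cache directive calls are usually reached through DistributedFileSystem; a sketch, assuming a configured HDFS client, with a hypothetical pool name and path:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo;
import org.apache.hadoop.hdfs.protocol.CachePoolInfo;

public class CachingExample {
  public static void main(String[] args) throws Exception {
    DistributedFileSystem dfs =
        (DistributedFileSystem) FileSystem.get(new Configuration());
    // A pool groups directives and carries ownership/limit settings.
    dfs.addCachePool(new CachePoolInfo("hot-data"));
    // Ask the datanodes to keep this file's blocks cached in memory.
    long id = dfs.addCacheDirective(new CacheDirectiveInfo.Builder()
        .setPath(new Path("/data/lookup.tbl"))
        .setPool("hot-data")
        .build());
    System.out.println("directive id " + id);
  }
}
```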
-
modifyAclEntries
public void modifyAclEntries(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException
Description copied from interface: ClientProtocol
Modifies ACL entries of files and directories. This method can add new ACL entries or modify the permissions on existing ACL entries. All existing ACL entries that are not specified in this call are retained without changes. (Modifications are merged into the current ACL.)
- Specified by:
modifyAclEntries in interface ClientProtocol
- Throws:
IOException
-
removeAclEntries
public void removeAclEntries(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException
Description copied from interface: ClientProtocol
Removes ACL entries from files and directories. Other ACL entries are retained.
- Specified by:
removeAclEntries in interface ClientProtocol
- Throws:
IOException
-
removeDefaultAcl
Description copied from interface: ClientProtocol
Removes all default ACL entries from files and directories.
- Specified by:
removeDefaultAcl in interface ClientProtocol
- Throws:
IOException
-
removeAcl
Description copied from interface: ClientProtocol
Removes all but the base ACL entries of files and directories. The entries for user, group, and others are retained for compatibility with permission bits.
- Specified by:
removeAcl in interface ClientProtocol
- Throws:
IOException
-
setAcl
public void setAcl(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException
Description copied from interface: ClientProtocol
Fully replaces the ACL of files and directories, discarding all existing entries.
- Specified by:
setAcl in interface ClientProtocol
- Throws:
IOException
-
getAclStatus
Description copied from interface: ClientProtocol
Gets the ACLs of files and directories.
- Specified by:
getAclStatus in interface ClientProtocol
- Throws:
IOException
-
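The three mutation methods above differ in how the supplied spec combines with the file's existing ACL: modifyAclEntries merges, removeAclEntries subtracts, and setAcl replaces wholesale. A toy, self-contained model of those semantics, using a Map from entry name to permission string rather than the real AclEntry type:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model of ClientProtocol's ACL mutation semantics. The Map stands
// in for an ACL (entry name -> permission string); this is not the real
// org.apache.hadoop.fs.permission.AclEntry machinery.
class AclSemanticsSketch {
    // modifyAclEntries: merge the spec into the current ACL; entries not
    // named in the spec are retained without changes.
    static Map<String, String> modify(Map<String, String> acl, Map<String, String> spec) {
        Map<String, String> out = new LinkedHashMap<>(acl);
        out.putAll(spec);
        return out;
    }

    // removeAclEntries: drop only the named entries; others are retained.
    static Map<String, String> remove(Map<String, String> acl, Iterable<String> names) {
        Map<String, String> out = new LinkedHashMap<>(acl);
        for (String n : names) out.remove(n);
        return out;
    }

    // setAcl: fully replace, discarding all existing entries.
    static Map<String, String> set(Map<String, String> acl, Map<String, String> spec) {
        return new LinkedHashMap<>(spec);
    }

    public static void main(String[] args) {
        Map<String, String> acl = new LinkedHashMap<>();
        acl.put("user:alice", "rw-");
        acl.put("group::", "r--");
        System.out.println(modify(acl, Map.of("user:alice", "r--")));
        System.out.println(set(acl, Map.of("user:bob", "rwx")));
    }
}
```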
createEncryptionZone
Description copied from interface: ClientProtocol
Create an encryption zone.
- Specified by:
createEncryptionZone in interface ClientProtocol
- Throws:
IOException
-
getEZForPath
Description copied from interface: ClientProtocol
Get the encryption zone for a path.
- Specified by:
getEZForPath in interface ClientProtocol
- Throws:
IOException
-
listEncryptionZones
public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<EncryptionZone> listEncryptionZones(long id) throws IOException
Description copied from interface: ClientProtocol
Used to implement cursor-based batched listing of EncryptionZones.
- Specified by:
listEncryptionZones in interface ClientProtocol
- Parameters:
id - ID of the last item in the previous batch. If there is no previous batch, a negative value can be used.
- Returns:
- Batch of encryption zones.
- Throws:
IOException
-
setErasureCodingPolicy
Description copied from interface: ClientProtocol
Set an erasure coding policy on a specified path.
- Specified by:
setErasureCodingPolicy in interface ClientProtocol
- Parameters:
src - The path to set policy on.
ecPolicyName - The erasure coding policy name.
- Throws:
IOException
-
unsetErasureCodingPolicy
Description copied from interface: ClientProtocol
Unset the erasure coding policy from a specified path.
- Specified by:
unsetErasureCodingPolicy in interface ClientProtocol
- Parameters:
src - The path to unset the policy from.
- Throws:
IOException
-
getECTopologyResultForPolicies
public ECTopologyVerifierResult getECTopologyResultForPolicies(String... policyNames) throws IOException
Description copied from interface: ClientProtocol
Verifies whether the given policies are supported in the given cluster setup. If no policy is specified, all enabled policies are checked.
- Specified by:
getECTopologyResultForPolicies in interface ClientProtocol
- Parameters:
policyNames - names of the policies.
- Returns:
- the result of whether the given policies are supported in the cluster setup
- Throws:
IOException
-
reencryptEncryptionZone
public void reencryptEncryptionZone(String zone, HdfsConstants.ReencryptAction action) throws IOException
Description copied from interface: ClientProtocol
Used to implement re-encryption of encryption zones.
- Specified by:
reencryptEncryptionZone in interface ClientProtocol
- Parameters:
zone - the encryption zone to re-encrypt.
action - the action for the re-encryption.
- Throws:
IOException
-
listReencryptionStatus
public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<ZoneReencryptionStatus> listReencryptionStatus(long id) throws IOException
Description copied from interface: ClientProtocol
Used to implement cursor-based batched listing of ZoneReencryptionStatus entries.
- Specified by:
listReencryptionStatus in interface ClientProtocol
- Parameters:
id - ID of the last item in the previous batch. If there is no previous batch, a negative value can be used.
- Returns:
- Batch of zone re-encryption statuses.
- Throws:
IOException
-
setXAttr
public void setXAttr(String src, XAttr xAttr, EnumSet<org.apache.hadoop.fs.XAttrSetFlag> flag) throws IOException
Description copied from interface: ClientProtocol
Set an xattr of a file or directory. The name must be prefixed with the namespace followed by ".". For example, "user.attr". Refer to the HDFS extended attributes user documentation for details.
- Specified by:
setXAttr in interface ClientProtocol
- Parameters:
src - file or directory
xAttr - XAttr to set
flag - set flag
- Throws:
IOException
-
getXAttrs
Description copied from interface: ClientProtocol
Get xattrs of a file or directory. Values in the xAttrs parameter are ignored. If xAttrs is null or empty, this is the same as getting all xattrs of the file or directory. Only those xattrs that the logged-in user has permission to view are returned. Refer to the HDFS extended attributes user documentation for details.
- Specified by:
getXAttrs in interface ClientProtocol
- Parameters:
src - file or directory
xAttrs - xAttrs to get
- Returns:
- XAttr list
- Throws:
IOException
-
listXAttrs
Description copied from interface: ClientProtocol
List the xattr names for a file or directory. Only the xattr names that the logged-in user has permission to access are returned. Refer to the HDFS extended attributes user documentation for details.
- Specified by:
listXAttrs in interface ClientProtocol
- Parameters:
src - file or directory
- Returns:
- XAttr list
- Throws:
IOException
-
removeXAttr
Description copied from interface: ClientProtocol
Remove an xattr of a file or directory. The value in the xAttr parameter is ignored. The name must be prefixed with the namespace followed by ".". For example, "user.attr". Refer to the HDFS extended attributes user documentation for details.
- Specified by:
removeXAttr in interface ClientProtocol
- Parameters:
src - file or directory
xAttr - XAttr to remove
- Throws:
IOException
-
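The "namespace followed by a dot" naming rule for setXAttr and removeXAttr can be checked up front on the client side. A small, self-contained validator for that rule (the namespace list mirrors HDFS's user/trusted/system/security/raw namespaces; the helper itself is hypothetical, not part of the Hadoop API):

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical client-side validator for the "namespace.name" xattr
// naming rule described above; not a real Hadoop class.
class XAttrNameSketch {
    // HDFS extended attribute namespaces.
    static final List<String> NAMESPACES =
        Arrays.asList("user", "trusted", "system", "security", "raw");

    // Valid iff the name is "<namespace>.<rest>" with a known namespace
    // and a non-empty remainder, e.g. "user.attr".
    static boolean isValidXAttrName(String name) {
        int dot = name.indexOf('.');
        if (dot <= 0 || dot == name.length() - 1) return false;
        return NAMESPACES.contains(name.substring(0, dot));
    }

    public static void main(String[] args) {
        System.out.println(isValidXAttrName("user.attr"));
        System.out.println(isValidXAttrName("attr"));
    }
}
```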
checkAccess
public void checkAccess(String path, org.apache.hadoop.fs.permission.FsAction mode) throws IOException
Description copied from interface: ClientProtocol
Checks if the user can access a path. The mode specifies which access checks to perform. If the requested permissions are granted, then the method returns normally. If access is denied, then the method throws an AccessControlException. In general, applications should avoid using this method, due to the risk of time-of-check/time-of-use race conditions: the permissions on a file may change immediately after the access call returns.
- Specified by:
checkAccess in interface ClientProtocol
- Parameters:
path - Path to check
mode - type of access to check
- Throws:
org.apache.hadoop.security.AccessControlException - if access is denied
FileNotFoundException - if the path does not exist
IOException - see specific implementation
-
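checkAccess passes only when the permissions actually granted cover every bit of the requested FsAction. A self-contained bitmask model of that "implies" relation, mirroring FsAction's rwx octal encoding but not using the real enum (SecurityException stands in for AccessControlException):

```java
// Bitmask model of FsAction-style access checking: READ=4, WRITE=2,
// EXECUTE=1; combinations are bitwise ORs. Not the real
// org.apache.hadoop.fs.permission.FsAction enum.
class AccessCheckSketch {
    static final int READ = 4, WRITE = 2, EXECUTE = 1;

    // granted "implies" requested iff every requested bit is granted.
    static boolean implies(int granted, int requested) {
        return (granted & requested) == requested;
    }

    // Sketch of checkAccess semantics: return normally, or throw when
    // any requested bit is missing.
    static void checkAccess(int granted, int requested) {
        if (!implies(granted, requested)) {
            throw new SecurityException("access denied"); // stand-in for AccessControlException
        }
    }

    public static void main(String[] args) {
        checkAccess(READ | WRITE, READ);     // passes
        System.out.println(implies(READ, WRITE));
    }
}
```

Even when the check passes, the TOCTOU caveat above still applies: the permissions may change before the caller's next operation.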
setStoragePolicy
Description copied from interface: ClientProtocol
Set the storage policy for a file/directory.
- Specified by:
setStoragePolicy in interface ClientProtocol
- Parameters:
src - Path of an existing file/directory.
policyName - The name of the storage policy
- Throws:
SnapshotAccessControlException - If access is denied
org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink
FileNotFoundException - If file/dir src is not found
QuotaExceededException - If changes violate the quota restriction
IOException
-
unsetStoragePolicy
Description copied from interface: ClientProtocol
Unset the storage policy set for a given file or directory.
- Specified by:
unsetStoragePolicy in interface ClientProtocol
- Parameters:
src - Path of an existing file/directory.
- Throws:
SnapshotAccessControlException - If access is denied
org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink
FileNotFoundException - If file/dir src is not found
QuotaExceededException - If changes violate the quota restriction
IOException
-
getStoragePolicy
Description copied from interface: ClientProtocol
Get the storage policy for a file/directory.
- Specified by:
getStoragePolicy in interface ClientProtocol
- Parameters:
path - Path of an existing file/directory.
- Throws:
org.apache.hadoop.security.AccessControlException - If access is denied
org.apache.hadoop.fs.UnresolvedLinkException - if path contains a symlink
FileNotFoundException - If file/dir path is not found
IOException
-
getStoragePolicies
Description copied from interface: ClientProtocol
Get all the available block storage policies.
- Specified by:
getStoragePolicies in interface ClientProtocol
- Returns:
- All block storage policies currently in use.
- Throws:
IOException
-
getCurrentEditLogTxid
Description copied from interface: ClientProtocol
Get the highest txid the NameNode knows has been written to the edit log, or -1 if the NameNode's edit log is not yet open for write. Used as the starting point for the inotify event stream.
- Specified by:
getCurrentEditLogTxid in interface ClientProtocol
- Throws:
IOException
-
getEditsFromTxid
Description copied from interface: ClientProtocol
Get an ordered list of batches of events corresponding to the edit log transactions with txids equal to or greater than txid.
- Specified by:
getEditsFromTxid in interface ClientProtocol
- Throws:
IOException
-
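Together, getCurrentEditLogTxid and getEditsFromTxid support an inotify-style polling loop: seed a cursor from the current txid, fetch everything at or above it, then advance the cursor past the last transaction seen. A self-contained sketch of that cursor arithmetic, with a long[] of txids standing in for the edit log (the real API returns event batches, not raw txids):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the polling pattern built on getCurrentEditLogTxid /
// getEditsFromTxid. The long[] and List<Long> here are stand-ins for
// the edit log and its event batches; not the real Hadoop types.
class EditLogPollSketch {
    // Stand-in server side: all transactions with txid >= fromTxid
    // (the "equal to or greater than" contract above).
    static List<Long> editsFromTxid(long[] log, long fromTxid) {
        List<Long> batch = new ArrayList<>();
        for (long txid : log) {
            if (txid >= fromTxid) batch.add(txid);
        }
        return batch;
    }

    // Client cursor update: the next poll starts just past the last
    // txid seen; an empty batch leaves the cursor unchanged.
    static long advance(long cursor, List<Long> batch) {
        return batch.isEmpty() ? cursor : batch.get(batch.size() - 1) + 1;
    }

    public static void main(String[] args) {
        long[] log = {10, 11, 12, 13};
        List<Long> batch = editsFromTxid(log, 12);
        System.out.println(batch + " next=" + advance(12, batch));
    }
}
```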
addErasureCodingPolicies
public AddErasureCodingPolicyResponse[] addErasureCodingPolicies(ErasureCodingPolicy[] policies) throws IOException
Description copied from interface: ClientProtocol
Add erasure coding policies to HDFS. For each input policy, schema and cellSize are required; name and id are ignored, since they are automatically created and assigned by the Namenode once the policy is successfully added, and are returned in the response.
- Specified by:
addErasureCodingPolicies in interface ClientProtocol
- Parameters:
policies - The user-defined EC policy list to add.
- Returns:
- The list of responses for the add operations.
- Throws:
IOException
-
removeErasureCodingPolicy
Description copied from interface: ClientProtocol
Remove an erasure coding policy.
- Specified by:
removeErasureCodingPolicy in interface ClientProtocol
- Parameters:
ecPolicyName - The name of the policy to be removed.
- Throws:
IOException
-
enableErasureCodingPolicy
Description copied from interface: ClientProtocol
Enable an erasure coding policy.
- Specified by:
enableErasureCodingPolicy in interface ClientProtocol
- Parameters:
ecPolicyName - The name of the policy to be enabled.
- Throws:
IOException
-
disableErasureCodingPolicy
Description copied from interface: ClientProtocol
Disable an erasure coding policy.
- Specified by:
disableErasureCodingPolicy in interface ClientProtocol
- Parameters:
ecPolicyName - The name of the policy to be disabled.
- Throws:
IOException
-
getErasureCodingPolicies
Description copied from interface: ClientProtocol
Get the erasure coding policies loaded in the Namenode, excluding the REPLICATION policy.
- Specified by:
getErasureCodingPolicies in interface ClientProtocol
- Throws:
IOException
-
getErasureCodingCodecs
Description copied from interface: ClientProtocol
Get the erasure coding codecs loaded in the Namenode.
- Specified by:
getErasureCodingCodecs in interface ClientProtocol
- Throws:
IOException
-
getErasureCodingPolicy
Description copied from interface: ClientProtocol
Get information about the EC policy for a path. Returns null if the directory or file has the REPLICATION policy.
- Specified by:
getErasureCodingPolicy in interface ClientProtocol
- Parameters:
src - path to get the info for
- Throws:
IOException
-
getQuotaUsage
Description copied from interface: ClientProtocol
Get QuotaUsage rooted at the specified directory. Note: due to HDFS-6763, a standby/observer doesn't keep up-to-date info about quota usage, so even though this operation is read-only, it can only be directed to the active NameNode.
- Specified by:
getQuotaUsage in interface ClientProtocol
- Parameters:
path - The string representation of the path
- Throws:
org.apache.hadoop.security.AccessControlException - permission denied
FileNotFoundException - file path is not found
org.apache.hadoop.fs.UnresolvedLinkException - if path contains a symlink
IOException - If an I/O error occurred
-
listOpenFiles
@Deprecated
public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<OpenFileEntry> listOpenFiles(long prevId) throws IOException
Deprecated.
Description copied from interface: ClientProtocol
List open files in the system in batches. The INode id is the cursor, and the open files returned in a batch have INode ids greater than the cursor INode id. Open files can only be requested by the super user, and the list across batches is not atomic.
- Specified by:
listOpenFiles in interface ClientProtocol
- Parameters:
prevId - the cursor INode id.
- Throws:
IOException
-
listOpenFiles
public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<OpenFileEntry> listOpenFiles(long prevId, EnumSet<OpenFilesIterator.OpenFilesType> openFilesTypes, String path) throws IOException
Description copied from interface: ClientProtocol
List open files in the system in batches. The INode id is the cursor, and the open files returned in a batch have INode ids greater than the cursor INode id. Open files can only be requested by the super user, and the list across batches is not atomic.
- Specified by:
listOpenFiles in interface ClientProtocol
- Parameters:
prevId - the cursor INode id.
openFilesTypes - types to filter the open files.
path - path to filter the open files.
- Throws:
IOException
-
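Both listOpenFiles overloads page on the INode id: each batch contains only entries with ids strictly greater than the prevId cursor, and, as noted above, the combined listing across batches is not atomic. A self-contained sketch of that strictly-greater cursor, with plain longs standing in for OpenFileEntry objects:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of listOpenFiles paging: ids strictly greater than prevId,
// up to BATCH_SIZE per call. Plain longs stand in for the real
// OpenFileEntry type.
class OpenFilesPagingSketch {
    static final int BATCH_SIZE = 3;

    // Stand-in server side: next batch of open-file INode ids after the
    // cursor (ids are assumed sorted ascending).
    static List<Long> listBatch(long[] sortedIds, long prevId) {
        List<Long> batch = new ArrayList<>();
        for (long id : sortedIds) {
            if (id > prevId) {               // strictly greater than cursor
                batch.add(id);
                if (batch.size() == BATCH_SIZE) break;
            }
        }
        return batch;
    }

    // Client loop: start below any valid INode id, advance to the last
    // id in each batch, stop on an empty batch.
    static List<Long> listAll(long[] sortedIds) {
        List<Long> all = new ArrayList<>();
        long prevId = 0;
        List<Long> batch;
        while (!(batch = listBatch(sortedIds, prevId)).isEmpty()) {
            all.addAll(batch);
            prevId = batch.get(batch.size() - 1);
        }
        return all;
    }

    public static void main(String[] args) {
        long[] ids = {5, 9, 14, 27, 33};
        System.out.println(listAll(ids));
    }
}
```

Because each call re-reads live state, a file closed between two calls simply stops appearing; the loop never sees a frozen snapshot.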
msync
Description copied from interface: ClientProtocol
Called by a client to wait until the server has reached the state id of the client. The client and server state ids are given by the client-side and server-side alignment contexts, respectively. This can be a blocking call.
- Specified by:
msync in interface ClientProtocol
- Throws:
IOException
-
satisfyStoragePolicy
Description copied from interface: ClientProtocol
Satisfy the storage policy for a file/directory.
- Specified by:
satisfyStoragePolicy in interface ClientProtocol
- Parameters:
src - Path of an existing file/directory.
- Throws:
org.apache.hadoop.security.AccessControlException - If access is denied.
org.apache.hadoop.fs.UnresolvedLinkException - if src contains a symlink.
FileNotFoundException - If file/dir src is not found.
SafeModeException - if the operation is not allowed in safemode.
IOException
-
getSlowDatanodeReport
Description copied from interface: ClientProtocol
Get a report on all of the slow DataNodes. Slow-running DataNodes are identified by the outlier detection algorithm, if slow peer tracking is enabled for the DFS cluster.
- Specified by:
getSlowDatanodeReport in interface ClientProtocol
- Returns:
- Datanode report for slow running datanodes.
- Throws:
IOException- If an I/O error occurs.
-
getHAServiceState
Description copied from interface: ClientProtocol
Get the HA service state of the server.
- Specified by:
getHAServiceState in interface ClientProtocol
- Returns:
- server HA state
- Throws:
IOException
-
getEnclosingRoot
Description copied from interface: ClientProtocol
Get the enclosing root for a path.
- Specified by:
getEnclosingRoot in interface ClientProtocol
- Throws:
IOException
-