Class RouterRpcServer
java.lang.Object
org.apache.hadoop.service.AbstractService
org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer
- All Implemented Interfaces:
Closeable,AutoCloseable,org.apache.hadoop.hdfs.protocol.ClientProtocol,org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol,org.apache.hadoop.security.RefreshUserMappingsProtocol,org.apache.hadoop.service.Service,org.apache.hadoop.tools.GetUserMappingsProtocol
public class RouterRpcServer
extends org.apache.hadoop.service.AbstractService
implements org.apache.hadoop.hdfs.protocol.ClientProtocol, org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol, org.apache.hadoop.security.RefreshUserMappingsProtocol, org.apache.hadoop.tools.GetUserMappingsProtocol
This class is responsible for handling all of the RPC calls to the Router. It is created, started, and stopped by the Router. It implements the ClientProtocol to mimic a NameNode and proxies the requests to the active NameNode.
-
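The proxying described above can be sketched without any Hadoop dependencies: a server implements the same protocol interface as the backends it fronts and forwards each call to whichever backend is currently active. All names below are illustrative stand-ins, not the real RouterRpcServer API (which resolves the active NameNode through an ActiveNamenodeResolver).

```java
import java.util.List;

// Minimal sketch of the router pattern: implement the same protocol as the
// backend and forward each call to the currently active instance.
public class RouterSketch {

    // Stand-in for ClientProtocol.
    interface Protocol {
        String getFileInfo(String src);
    }

    // Stand-in for a NameNode that may or may not be active.
    static class Backend implements Protocol {
        final boolean active;
        final String name;
        Backend(String name, boolean active) { this.name = name; this.active = active; }
        public String getFileInfo(String src) { return name + ":" + src; }
    }

    // The "router": same interface, proxies to the first active backend.
    static class Router implements Protocol {
        final List<Backend> backends;
        Router(List<Backend> backends) { this.backends = backends; }
        public String getFileInfo(String src) {
            for (Backend b : backends) {
                if (b.active) {
                    return b.getFileInfo(src); // forward to the active backend
                }
            }
            throw new IllegalStateException("no active backend");
        }
    }

    public static String demo(String src) {
        Router r = new Router(List.of(
            new Backend("nn-standby", false),
            new Backend("nn-active", true)));
        return r.getFileInfo(src);
    }

    public static void main(String[] args) {
        System.out.println(demo("/user/foo")); // forwards to nn-active
    }
}
```

The point of the pattern is that clients keep using the unmodified ClientProtocol; only the endpoint they connect to changes.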
Nested Class Summary
Nested classes/interfaces inherited from interface org.apache.hadoop.service.Service
org.apache.hadoop.service.Service.STATE -
Field Summary
Fields
static final String CONCURRENT_NS: Name service keyword to identify fan-out calls.
Fields inherited from interface org.apache.hadoop.hdfs.protocol.ClientProtocol
GET_STATS_BYTES_IN_FUTURE_BLOCKS_IDX, GET_STATS_CAPACITY_IDX, GET_STATS_CORRUPT_BLOCKS_IDX, GET_STATS_LOW_REDUNDANCY_IDX, GET_STATS_MISSING_BLOCKS_IDX, GET_STATS_MISSING_REPL_ONE_BLOCKS_IDX, GET_STATS_PENDING_DELETION_BLOCKS_IDX, GET_STATS_REMAINING_IDX, GET_STATS_UNDER_REPLICATED_IDX, GET_STATS_USED_IDX, STATS_ARRAY_LENGTH, versionID
Fields inherited from interface org.apache.hadoop.tools.GetUserMappingsProtocol
versionID
Fields inherited from interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol
ACT_CHECKPOINT, ACT_SHUTDOWN, ACT_UNKNOWN, FATAL, NOTIFY, versionID
Fields inherited from interface org.apache.hadoop.security.RefreshUserMappingsProtocol
versionID
-
Constructor Summary
Constructors
RouterRpcServer(org.apache.hadoop.conf.Configuration conf, Router router, ActiveNamenodeResolver nnResolver, FileSubclusterResolver fileResolver): Construct a router RPC server.
-
Method Summary
void abandonBlock(org.apache.hadoop.hdfs.protocol.ExtendedBlock b, long fileId, String src, String holder)
org.apache.hadoop.hdfs.protocol.LocatedBlock addBlock(String src, String clientName, org.apache.hadoop.hdfs.protocol.ExtendedBlock previous, org.apache.hadoop.hdfs.protocol.DatanodeInfo[] excludedNodes, long fileId, String[] favoredNodes, EnumSet<org.apache.hadoop.hdfs.AddBlockFlag> addBlockFlags): Excluded and favored nodes are not verified and will be ignored by the placement policy if they are not in the same nameservice as the file.
long addCacheDirective(org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo path, EnumSet<org.apache.hadoop.fs.CacheFlag> flags)
void addCachePool(org.apache.hadoop.hdfs.protocol.CachePoolInfo info)
org.apache.hadoop.hdfs.protocol.AddErasureCodingPolicyResponse[] addErasureCodingPolicies(org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy[] policies)
void allowSnapshot(String snapshotRoot)
org.apache.hadoop.hdfs.protocol.LastBlockWithStatus append(String src, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag)
void cancelDelegationToken(org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier> token)
void checkAccess(String path, org.apache.hadoop.fs.permission.FsAction mode)
void checkOperation(org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory op): Check if the Router is in safe mode.
void checkOperation(org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory op, boolean supported): Check if the Router is in safe mode.
boolean complete(String src, String clientName, org.apache.hadoop.hdfs.protocol.ExtendedBlock last, long fileId)
void concat(String trg, String[] srcs)
org.apache.hadoop.hdfs.protocol.HdfsFileStatus create(String src, org.apache.hadoop.fs.permission.FsPermission masked, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize, org.apache.hadoop.crypto.CryptoProtocolVersion[] supportedVersions, String ecPolicyName, String storagePolicy)
void createEncryptionZone(String src, String keyName)
String createSnapshot(String snapshotRoot, String snapshotName)
void createSymlink(String target, String link, org.apache.hadoop.fs.permission.FsPermission dirPerms, boolean createParent)
boolean delete(String src, boolean recursive)
void deleteSnapshot(String snapshotRoot, String snapshotName)
void disableErasureCodingPolicy(String ecPolicyName)
void disallowSnapshot(String snapshot)
void enableErasureCodingPolicy(String ecPolicyName)
void endCheckpoint(org.apache.hadoop.hdfs.server.protocol.NamenodeRegistration registration, org.apache.hadoop.hdfs.server.namenode.CheckpointSignature sig)
void errorReport(org.apache.hadoop.hdfs.server.protocol.NamenodeRegistration registration, int errorCode, String msg)
void finalizeUpgrade()
org.apache.hadoop.fs.permission.AclStatus getAclStatus(String src)
org.apache.hadoop.hdfs.protocol.LocatedBlock getAdditionalDatanode(String src, long fileId, org.apache.hadoop.hdfs.protocol.ExtendedBlock blk, org.apache.hadoop.hdfs.protocol.DatanodeInfo[] existings, String[] existingStorageIDs, org.apache.hadoop.hdfs.protocol.DatanodeInfo[] excludes, int numAdditionalNodes, String clientName): Excluded nodes are not verified and will be ignored by placement if they are not in the same nameservice as the file.
org.apache.hadoop.hdfs.protocol.BatchedDirectoryListing getBatchedListing(String[] srcs, byte[] startAfter, boolean needLocation)
org.apache.hadoop.hdfs.protocol.LocatedBlocks getBlockLocations(String src, long offset, long length)
org.apache.hadoop.hdfs.server.protocol.BlocksWithLocations getBlocks(org.apache.hadoop.hdfs.protocol.DatanodeInfo datanode, long size, long minBlockSize, long hotBlockTimeInterval, org.apache.hadoop.fs.StorageType storageType)
getClientProtocolModule(): Get ClientProtocol module implementation.
org.apache.hadoop.fs.ContentSummary getContentSummary(String path)
getCreateLocationAsync(String src, List<RemoteLocation> locations): Get the location to create a file.
org.apache.hadoop.hdfs.protocol.DatanodeInfo[] getDatanodeReport(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type)
org.apache.hadoop.hdfs.protocol.DatanodeInfo[] getDatanodeReport(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type, boolean requireResponse, long timeOutMs): Get the datanode report with a timeout.
org.apache.hadoop.hdfs.protocol.DatanodeInfo[] getDatanodeReportAsync(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type, boolean requireResponse, long timeOutMs): Get the datanode report with a timeout.
org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport[] getDatanodeStorageReport(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type)
getDatanodeStorageReportMap(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type): Get the list of datanodes per subcluster.
getDatanodeStorageReportMap(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type, boolean requireResponse, long timeOutMs): Get the list of datanodes per subcluster.
getDatanodeStorageReportMapAsync(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type)
getDatanodeStorageReportMapAsync(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type, boolean requireResponse, long timeOutMs): Get the list of datanodes per subcluster.
org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier> getDelegationToken(org.apache.hadoop.io.Text renewer)
org.apache.hadoop.hdfs.protocol.ECTopologyVerifierResult getECTopologyResultForPolicies(String... policyNames)
org.apache.hadoop.hdfs.server.protocol.RemoteEditLogManifest getEditLogManifest(long sinceTxId)
org.apache.hadoop.hdfs.inotify.EventBatchList getEditsFromTxid(long txid)
org.apache.hadoop.fs.Path getEnclosingRoot(String src)
org.apache.hadoop.hdfs.protocol.EncryptionZone getEZForPath(String src)
org.apache.hadoop.hdfs.protocol.HdfsFileStatus getFileInfo(String src)
org.apache.hadoop.hdfs.protocol.HdfsFileStatus getFileLinkInfo(String src)
String[] getGroupsForUser(String user)
String getLinkTarget(String path)
org.apache.hadoop.hdfs.protocol.DirectoryListing getListing(String src, byte[] startAfter, boolean needLocation)
org.apache.hadoop.hdfs.protocol.HdfsLocatedFileStatus getLocatedFileInfo(String src, boolean needBlockToken)
getLocationsForPath(String path, boolean failIfLocked): Get the possible locations of a path in the federated cluster.
getLocationsForPath(String path, boolean failIfLocked, boolean needQuotaVerify): Get the possible locations of a path in the federated cluster.
long getMostRecentNameNodeFileTxId(org.apache.hadoop.hdfs.server.namenode.NNStorage.NameNodeFile nnf)
getNamenodeResolver(): Get the active namenode resolver.
long getPreferredBlockSize(String filename)
getQuotaModule(): Get quota module implementation.
org.apache.hadoop.fs.QuotaUsage getQuotaUsage(String path)
static org.apache.hadoop.security.UserGroupInformation getRemoteUser(): Get the user that is invoking this operation.
getRouterSecurityManager(): Get the RPC security manager.
getRouterStateIdContext(): Get the routerStateIdContext used by this server.
getRpcAddress(): Get the RPC address of the service.
getRPCClient(): Get the RPC client to the Namenode.
getRPCMetrics(): Get RPC metrics info.
getRPCMonitor(): Get the RPC monitor and metrics.
org.apache.hadoop.ipc.RPC.Server getServer(): Allow access to the client RPC server for testing.
org.apache.hadoop.fs.FsServerDefaults getServerDefaults()
org.apache.hadoop.hdfs.protocol.DatanodeInfo[] getSlowDatanodeReport(boolean requireResponse, long timeOutMs): Get the slow running datanodes report with a timeout.
org.apache.hadoop.hdfs.protocol.DatanodeInfo[] getSlowDatanodeReportAsync(boolean requireResponse, long timeOutMs): Get the slow running datanodes report with a timeout.
org.apache.hadoop.hdfs.protocol.SnapshotDiffReport getSnapshotDiffReport(String snapshotRoot, String earlierSnapshotName, String laterSnapshotName)
org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing getSnapshotDiffReportListing(String snapshotRoot, String earlierSnapshotName, String laterSnapshotName, byte[] startPath, int index)
org.apache.hadoop.hdfs.protocol.SnapshotStatus[] getSnapshotListing(String snapshotRoot)
org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus[] getSnapshottableDirListing()
long[] getStats()
org.apache.hadoop.hdfs.protocol.BlockStoragePolicy[] getStoragePolicies()
org.apache.hadoop.hdfs.protocol.BlockStoragePolicy getStoragePolicy(String path)
getSubclusterResolver(): Get the subcluster resolver.
void initAsyncThreadPools(org.apache.hadoop.conf.Configuration configuration): Init router async handlers and router async responders.
<T> T invokeAtAvailableNsAsync(RemoteMethod method, Class<T> clazz): Invokes the method at the default namespace; if the default namespace is not available, at the other available namespaces.
boolean isAsync()
boolean isFileClosed(String src)
boolean isInvokeConcurrent(String path): Check if a call needs to be invoked at all the locations.
boolean isPathAll(String path): Check if a path should be in all subclusters.
org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.CacheDirectiveEntry> listCacheDirectives(long prevId, org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo filter)
org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.CachePoolEntry> listCachePools(String prevKey)
org.apache.hadoop.hdfs.protocol.CorruptFileBlocks listCorruptFileBlocks(String path, String cookie)
org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.EncryptionZone> listEncryptionZones(long prevId)
org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.OpenFileEntry> listOpenFiles(long prevId): Deprecated.
org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.OpenFileEntry> listOpenFiles(long prevId, EnumSet<org.apache.hadoop.hdfs.protocol.OpenFilesIterator.OpenFilesType> openFilesTypes, String path)
org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.ZoneReencryptionStatus> listReencryptionStatus(long prevId)
List<org.apache.hadoop.fs.XAttr> listXAttrs(String src)
static <T> T[] merge(Map<FederationNamespaceInfo, T[]> map, Class<T> clazz): Merge the outputs from multiple namespaces.
void metaSave(String filename)
boolean mkdirs(String src, org.apache.hadoop.fs.permission.FsPermission masked, boolean createParent)
void modifyAclEntries(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec)
void modifyCacheDirective(org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo directive, EnumSet<org.apache.hadoop.fs.CacheFlag> flags)
void modifyCachePool(org.apache.hadoop.hdfs.protocol.CachePoolInfo info)
void msync()
boolean recoverLease(String src, String clientName)
void reencryptEncryptionZone(String zone, org.apache.hadoop.hdfs.protocol.HdfsConstants.ReencryptAction action)
void refreshNodes()
org.apache.hadoop.hdfs.server.protocol.NamenodeRegistration registerSubordinateNamenode(org.apache.hadoop.hdfs.server.protocol.NamenodeRegistration registration)
void removeAclEntries(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec)
void removeCacheDirective(long id)
void removeCachePool(String cachePoolName)
void removeDefaultAcl(String src)
void removeErasureCodingPolicy(String ecPolicyName)
void removeXAttr(String src, org.apache.hadoop.fs.XAttr xAttr)
boolean rename(String src, String dst): Deprecated.
void rename2(String src, String dst, org.apache.hadoop.fs.Options.Rename... options)
void renameSnapshot(String snapshotRoot, String snapshotOldName, String snapshotNewName)
long renewDelegationToken(org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier> token)
void renewLease(String clientName, List<String> namespaces)
void reportBadBlocks(org.apache.hadoop.hdfs.protocol.LocatedBlock[] blocks)
boolean restoreFailedStorage(String arg)
long rollEdits()
org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo rollingUpgrade(org.apache.hadoop.hdfs.protocol.HdfsConstants.RollingUpgradeAction action)
void satisfyStoragePolicy(String path)
boolean saveNamespace(long timeWindow, long txGap)
protected void serviceInit(org.apache.hadoop.conf.Configuration configuration)
protected void serviceStart()
protected void serviceStop()
void setBalancerBandwidth(long bandwidth)
void setErasureCodingPolicy(String src, String ecPolicyName)
void setOwner(String src, String username, String groupname)
void setPermission(String src, org.apache.hadoop.fs.permission.FsPermission permissions)
void setQuota(String path, long namespaceQuota, long storagespaceQuota, org.apache.hadoop.fs.StorageType type)
boolean setReplication(String src, short replication)
boolean setSafeMode(org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction action, boolean isChecked)
void setStoragePolicy(String src, String policyName)
void setXAttr(String src, org.apache.hadoop.fs.XAttr xAttr, EnumSet<org.apache.hadoop.fs.XAttrSetFlag> flag)
org.apache.hadoop.hdfs.server.protocol.NamenodeCommand startCheckpoint(org.apache.hadoop.hdfs.server.protocol.NamenodeRegistration registration)
boolean truncate(String src, long newLength, String clientName)
void unsetStoragePolicy(String src)
org.apache.hadoop.hdfs.protocol.LocatedBlock updateBlockForPipeline(org.apache.hadoop.hdfs.protocol.ExtendedBlock block, String clientName)
void updatePipeline(String clientName, org.apache.hadoop.hdfs.protocol.ExtendedBlock oldBlock, org.apache.hadoop.hdfs.protocol.ExtendedBlock newBlock, org.apache.hadoop.hdfs.protocol.DatanodeID[] newNodes, String[] newStorageIDs): Datanodes are not verified to be in the same nameservice as the old block.
boolean upgradeStatus()
org.apache.hadoop.hdfs.server.protocol.NamespaceInfo versionRequest()
Methods inherited from class org.apache.hadoop.service.AbstractService
close, getBlockers, getConfig, getFailureCause, getFailureState, getLifecycleHistory, getName, getServiceState, getStartTime, init, isInState, noteFailure, putBlocker, registerGlobalListener, registerServiceListener, removeBlocker, setConfig, start, stop, toString, unregisterGlobalListener, unregisterServiceListener, waitForServiceToStop
-
Field Details
-
CONCURRENT_NS
public static final String CONCURRENT_NS
Name service keyword to identify fan-out calls.
- See Also: Constant Field Values
-
-
Constructor Details
-
RouterRpcServer
public RouterRpcServer(org.apache.hadoop.conf.Configuration conf, Router router, ActiveNamenodeResolver nnResolver, FileSubclusterResolver fileResolver) throws IOException
Construct a router RPC server.
- Parameters:
conf - HDFS Configuration.
router - A router using this RPC server.
nnResolver - The NN resolver instance to determine active NNs in HA.
fileResolver - File resolver to resolve file paths to subclusters.
- Throws:
IOException - If the RPC server could not be created.
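The server is created, started, and stopped through the AbstractService lifecycle (serviceInit, serviceStart, serviceStop, documented below). A simplified sketch of that state machine, with no Hadoop dependency and illustrative names only:

```java
// Simplified sketch of the AbstractService lifecycle RouterRpcServer follows:
// NOTINITED -> INITED (serviceInit) -> STARTED (serviceStart) -> STOPPED
// (serviceStop). Illustrative only; the real state machine lives in
// org.apache.hadoop.service.AbstractService.
public class LifecycleSketch {
    public enum State { NOTINITED, INITED, STARTED, STOPPED }

    static class Service {
        State state = State.NOTINITED;
        void init() {
            if (state != State.NOTINITED) throw new IllegalStateException("cannot init in " + state);
            state = State.INITED;
        }
        void start() {
            if (state != State.INITED) throw new IllegalStateException("cannot start in " + state);
            state = State.STARTED;
        }
        // Stop is legal from any state, matching AbstractService semantics.
        void stop() { state = State.STOPPED; }
    }

    public static String run() {
        Service s = new Service();
        s.init();
        s.start();
        s.stop();
        return s.state.toString();
    }

    public static void main(String[] args) {
        System.out.println(run()); // STOPPED
    }
}
```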
-
-
Method Details
-
initAsyncThreadPools
public void initAsyncThreadPools(org.apache.hadoop.conf.Configuration configuration)
Init router async handlers and router async responders.
- Parameters:
configuration - the configuration.
-
serviceInit
- Overrides: serviceInit in class org.apache.hadoop.service.AbstractService
- Throws: Exception
-
serviceStart
- Overrides: serviceStart in class org.apache.hadoop.service.AbstractService
- Throws: Exception
-
serviceStop
- Overrides: serviceStop in class org.apache.hadoop.service.AbstractService
- Throws: Exception
-
getRouterStateIdContext
Get the routerStateIdContext used by this server.
- Returns: routerStateIdContext
-
getRouterSecurityManager
Get the RPC security manager.
- Returns: RPC security manager.
-
getRPCClient
Get the RPC client to the Namenode.
- Returns: RPC clients to the Namenodes.
-
getSubclusterResolver
Get the subcluster resolver.
- Returns: Subcluster resolver.
-
getNamenodeResolver
Get the active namenode resolver.
- Returns: Active namenode resolver.
-
getRPCMonitor
Get the RPC monitor and metrics.
- Returns: RPC monitor and metrics.
-
getServer
@VisibleForTesting public org.apache.hadoop.ipc.RPC.Server getServer()
Allow access to the client RPC server for testing.
- Returns: The RPC server.
-
getRpcAddress
Get the RPC address of the service.
- Returns: RPC service address.
-
checkOperation
public void checkOperation(org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory op, boolean supported) throws org.apache.hadoop.ipc.StandbyException, UnsupportedOperationException
Check if the Router is in safe mode. We should only see READ, WRITE, and UNCHECKED. This overload acts as a default handler for operations that are not implemented: if the operation is not supported, it always throws an exception reporting the operation.
- Parameters:
op - Category of the operation to check.
supported - Whether the operation is supported. If not, an UnsupportedOperationException is thrown.
- Throws:
org.apache.hadoop.ipc.StandbyException - If the Router is in safe mode and cannot serve client requests.
UnsupportedOperationException - If the operation is not supported.
-
checkOperation
public void checkOperation(org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory op) throws org.apache.hadoop.ipc.StandbyException
Check if the Router is in safe mode. We should only see READ, WRITE, and UNCHECKED. This function should be called by all ClientProtocol functions.
- Parameters:
op - Category of the operation to check.
- Throws:
org.apache.hadoop.ipc.StandbyException - If the Router is in safe mode and cannot serve client requests.
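The gate described above can be sketched in a self-contained way: every entry point first checks whether the operation is supported at all, then whether safe mode permits serving it. The exceptions and the policy that safe mode rejects every category are simplifying stand-ins, not the exact Router behavior.

```java
// Sketch of the checkOperation gate: unsupported operations fail first,
// then safe mode is checked. Names and the "reject everything in safe mode"
// policy are illustrative assumptions.
public class OperationGateSketch {
    public enum Category { READ, WRITE, UNCHECKED }

    public static void check(boolean safeMode, Category op, boolean supported) {
        if (!supported) {
            // mirrors the UnsupportedOperationException path in the Javadoc
            throw new UnsupportedOperationException("Operation " + op + " is not supported");
        }
        if (safeMode) {
            // stands in for org.apache.hadoop.ipc.StandbyException
            throw new IllegalStateException("Router is in safe mode and cannot serve client requests");
        }
    }

    // Helper that maps each outcome to a string, for demonstration.
    public static String probe(boolean safeMode, Category op, boolean supported) {
        try {
            check(safeMode, op, supported);
            return "ok";
        } catch (UnsupportedOperationException e) {
            return "unsupported";
        } catch (IllegalStateException e) {
            return "safemode";
        }
    }

    public static void main(String[] args) {
        System.out.println(probe(true, Category.WRITE, true)); // safemode
    }
}
```

The ordering matters: a call that is both unsupported and made during safe mode should report the unsupported operation, which is why that check comes first.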
-
invokeAtAvailableNsAsync
Invokes the method at the default namespace; if the default namespace is not available, it retries at the other available namespaces. Asynchronous version of the invokeAtAvailableNs method.
- Type Parameters:
T - expected return type.
- Parameters:
method - the remote method.
clazz - the type of the return value.
- Returns: the response received after invoking the method.
- Throws:
IOException - if there is no namespace available, or on other IOExceptions.
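The fallback order can be sketched without Hadoop: try the default namespace first, then each remaining namespace until one answers, and surface the last failure only when all of them are unavailable. All names here are illustrative; the real method operates on RemoteMethod and RPC clients.

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Function;

// Sketch of the invokeAtAvailableNs strategy: default namespace first,
// then fall back to the remaining namespaces in order.
public class AvailableNsSketch {

    public static <T> T invokeAtAvailableNs(Set<String> namespaces, String defaultNs,
                                            Function<String, T> call) {
        // Build the try order: default namespace first, then the others.
        LinkedHashSet<String> order = new LinkedHashSet<>();
        if (namespaces.contains(defaultNs)) {
            order.add(defaultNs);
        }
        order.addAll(namespaces);

        RuntimeException last = null;
        for (String ns : order) {
            try {
                return call.apply(ns); // first namespace that responds wins
            } catch (RuntimeException e) {
                last = e;              // unavailable: retry with the next one
            }
        }
        if (last != null) {
            throw last;
        }
        throw new RuntimeException("no namespace available");
    }

    public static String demo() {
        Set<String> nss = new LinkedHashSet<>(List.of("ns0", "ns1", "ns2"));
        return invokeAtAvailableNs(nss, "ns0", ns -> {
            if (ns.equals("ns0")) {
                throw new RuntimeException(ns + " unavailable"); // simulate outage
            }
            return "answered by " + ns;
        });
    }

    public static void main(String[] args) {
        System.out.println(demo()); // answered by ns1
    }
}
```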
-
getDelegationToken
public org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier> getDelegationToken(org.apache.hadoop.io.Text renewer) throws IOException
- Specified by: getDelegationToken in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws: IOException
-
renewDelegationToken
public long renewDelegationToken(org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier> token) throws IOException
- Specified by: renewDelegationToken in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws: IOException
-
cancelDelegationToken
public void cancelDelegationToken(org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier> token) throws IOException
- Specified by: cancelDelegationToken in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws: IOException
-
getBlockLocations
public org.apache.hadoop.hdfs.protocol.LocatedBlocks getBlockLocations(String src, long offset, long length) throws IOException
- Specified by: getBlockLocations in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws: IOException
-
getServerDefaults
- Specified by: getServerDefaults in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws: IOException
-
create
public org.apache.hadoop.hdfs.protocol.HdfsFileStatus create(String src, org.apache.hadoop.fs.permission.FsPermission masked, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize, org.apache.hadoop.crypto.CryptoProtocolVersion[] supportedVersions, String ecPolicyName, String storagePolicy) throws IOException
- Specified by: create in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws: IOException
-
getCreateLocationAsync
public RemoteLocation getCreateLocationAsync(String src, List<RemoteLocation> locations) throws IOException
Get the location to create a file. It checks if the file already exists in one of the locations. Asynchronous version of the getCreateLocation method.
- Parameters:
src - Path of the file to check.
locations - Prefetched locations for the file.
- Returns: The remote location for this file.
- Throws:
IOException - If the file has no creation location.
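The selection logic can be sketched in isolation: if the file already exists in one of the prefetched locations, that location is reused; otherwise a default (here, the first resolved location, which is an illustrative assumption) is chosen. The existence check is abstracted into a predicate instead of a real RPC.

```java
import java.util.List;
import java.util.function.Predicate;

// Sketch of the create-location choice: prefer a location where the file
// already exists, otherwise fall back to the first location. Names and the
// "first location" default are illustrative assumptions.
public class CreateLocationSketch {

    public static String getCreateLocation(String src, List<String> locations,
                                           Predicate<String> existsIn) {
        if (locations == null || locations.isEmpty()) {
            throw new IllegalArgumentException("no creation location for " + src);
        }
        for (String loc : locations) {
            if (existsIn.test(loc)) {
                return loc; // file already exists here: keep writes in one place
            }
        }
        return locations.get(0); // default: first resolved location
    }

    public static void main(String[] args) {
        List<String> locs = List.of("ns0:/data", "ns1:/data");
        // Pretend the file already exists in the ns1 subcluster.
        System.out.println(getCreateLocation("/data/f", locs, l -> l.startsWith("ns1")));
    }
}
```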
-
append
public org.apache.hadoop.hdfs.protocol.LastBlockWithStatus append(String src, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag) throws IOException
- Specified by: append in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws: IOException
-
recoverLease
- Specified by: recoverLease in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws: IOException
-
setReplication
- Specified by: setReplication in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws: IOException
-
setStoragePolicy
- Specified by: setStoragePolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws: IOException
-
getStoragePolicies
- Specified by: getStoragePolicies in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws: IOException
-
setPermission
public void setPermission(String src, org.apache.hadoop.fs.permission.FsPermission permissions) throws IOException
- Specified by: setPermission in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws: IOException
-
setOwner
- Specified by: setOwner in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws: IOException
-
addBlock
public org.apache.hadoop.hdfs.protocol.LocatedBlock addBlock(String src, String clientName, org.apache.hadoop.hdfs.protocol.ExtendedBlock previous, org.apache.hadoop.hdfs.protocol.DatanodeInfo[] excludedNodes, long fileId, String[] favoredNodes, EnumSet<org.apache.hadoop.hdfs.AddBlockFlag> addBlockFlags) throws IOException
Excluded and favored nodes are not verified and will be ignored by the placement policy if they are not in the same nameservice as the file.
- Specified by: addBlock in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws: IOException
-
getAdditionalDatanode
public org.apache.hadoop.hdfs.protocol.LocatedBlock getAdditionalDatanode(String src, long fileId, org.apache.hadoop.hdfs.protocol.ExtendedBlock blk, org.apache.hadoop.hdfs.protocol.DatanodeInfo[] existings, String[] existingStorageIDs, org.apache.hadoop.hdfs.protocol.DatanodeInfo[] excludes, int numAdditionalNodes, String clientName) throws IOException
Excluded nodes are not verified and will be ignored by placement if they are not in the same nameservice as the file.
- Specified by: getAdditionalDatanode in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws: IOException
-
abandonBlock
public void abandonBlock(org.apache.hadoop.hdfs.protocol.ExtendedBlock b, long fileId, String src, String holder) throws IOException
- Specified by: abandonBlock in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws: IOException
-
complete
public boolean complete(String src, String clientName, org.apache.hadoop.hdfs.protocol.ExtendedBlock last, long fileId) throws IOException
- Specified by: complete in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws: IOException
-
updateBlockForPipeline
public org.apache.hadoop.hdfs.protocol.LocatedBlock updateBlockForPipeline(org.apache.hadoop.hdfs.protocol.ExtendedBlock block, String clientName) throws IOException
- Specified by: updateBlockForPipeline in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws: IOException
-
updatePipeline
public void updatePipeline(String clientName, org.apache.hadoop.hdfs.protocol.ExtendedBlock oldBlock, org.apache.hadoop.hdfs.protocol.ExtendedBlock newBlock, org.apache.hadoop.hdfs.protocol.DatanodeID[] newNodes, String[] newStorageIDs) throws IOException
Datanodes are not verified to be in the same nameservice as the old block. TODO This may require validation.
- Specified by: updatePipeline in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws: IOException
-
getPreferredBlockSize
- Specified by: getPreferredBlockSize in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws: IOException
-
rename
Deprecated.
- Specified by: rename in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws: IOException
-
rename2
public void rename2(String src, String dst, org.apache.hadoop.fs.Options.Rename... options) throws IOException
- Specified by: rename2 in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws: IOException
-
concat
- Specified by: concat in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws: IOException
-
truncate
- Specified by: truncate in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws: IOException
-
delete
- Specified by: delete in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws: IOException
-
mkdirs
public boolean mkdirs(String src, org.apache.hadoop.fs.permission.FsPermission masked, boolean createParent) throws IOException
- Specified by: mkdirs in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws: IOException
-
renewLease
- Specified by: renewLease in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws: IOException
-
getListing
public org.apache.hadoop.hdfs.protocol.DirectoryListing getListing(String src, byte[] startAfter, boolean needLocation) throws IOException
- Specified by: getListing in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws: IOException
-
getBatchedListing
public org.apache.hadoop.hdfs.protocol.BatchedDirectoryListing getBatchedListing(String[] srcs, byte[] startAfter, boolean needLocation) throws IOException
- Specified by: getBatchedListing in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws: IOException
-
getFileInfo
- Specified by: getFileInfo in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws: IOException
-
isFileClosed
- Specified by: isFileClosed in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws: IOException
-
getFileLinkInfo
public org.apache.hadoop.hdfs.protocol.HdfsFileStatus getFileLinkInfo(String src) throws IOException
- Specified by: getFileLinkInfo in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws: IOException
-
getLocatedFileInfo
public org.apache.hadoop.hdfs.protocol.HdfsLocatedFileStatus getLocatedFileInfo(String src, boolean needBlockToken) throws IOException
- Specified by: getLocatedFileInfo in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws: IOException
-
getStats
- Specified by: getStats in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws: IOException
-
getDatanodeReport
public org.apache.hadoop.hdfs.protocol.DatanodeInfo[] getDatanodeReport(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type) throws IOException
- Specified by: getDatanodeReport in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws: IOException
-
getDatanodeReport
public org.apache.hadoop.hdfs.protocol.DatanodeInfo[] getDatanodeReport(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type, boolean requireResponse, long timeOutMs) throws IOException
Get the datanode report with a timeout.
- Parameters:
type - Type of the datanode.
requireResponse - If we require all the namespaces to report.
timeOutMs - Time out for the reply in milliseconds.
- Returns: List of datanodes.
- Throws:
IOException - If it cannot get the report.
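The requireResponse/timeOutMs semantics can be sketched in isolation: fan the request out to every subcluster, wait a bounded time for each reply, and either fail on the first missing response (requireResponse true) or return the partial results that did arrive (requireResponse false). This is a simplified stand-in, not the Router's RPC machinery, and it waits up to timeOutMs per namespace rather than overall.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Sketch of a timed fan-out: collect one result per namespace, with the
// requireResponse switch deciding whether a missing reply is fatal.
public class TimedFanoutSketch {

    public static List<String> gather(Map<String, CompletableFuture<String>> perNs,
                                      boolean requireResponse, long timeOutMs) throws Exception {
        List<String> results = new ArrayList<>();
        for (Map.Entry<String, CompletableFuture<String>> e : perNs.entrySet()) {
            try {
                // Wait at most timeOutMs for this namespace's reply.
                results.add(e.getValue().get(timeOutMs, TimeUnit.MILLISECONDS));
            } catch (TimeoutException te) {
                if (requireResponse) {
                    throw new Exception("namespace " + e.getKey() + " did not report in time");
                }
                // requireResponse == false: drop this namespace, keep partial results.
            }
        }
        return results;
    }

    public static int demo(boolean requireResponse) throws Exception {
        Map<String, CompletableFuture<String>> perNs = Map.of(
            "ns0", CompletableFuture.completedFuture("dn-report-ns0"),
            "ns1", new CompletableFuture<>()); // never completes: a slow subcluster
        return gather(perNs, requireResponse, 50).size();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo(false)); // partial result from the healthy subcluster
    }
}
```

With requireResponse set to true the same call would throw instead of returning the partial list.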
-
getDatanodeReportAsync
public org.apache.hadoop.hdfs.protocol.DatanodeInfo[] getDatanodeReportAsync(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type, boolean requireResponse, long timeOutMs) throws IOException
Get the datanode report with a timeout. Asynchronous version of the getDatanodeReport method.
- Parameters:
type - Type of the datanode.
requireResponse - If we require all the namespaces to report.
timeOutMs - Time out for the reply in milliseconds.
- Returns: List of datanodes.
- Throws:
IOException - If it cannot get the report.
-
getDatanodeStorageReport
public org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport[] getDatanodeStorageReport(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type) throws IOException
- Specified by: getDatanodeStorageReport in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
- Throws: IOException
-
getDatanodeStorageReportMap
public Map<String,org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport[]> getDatanodeStorageReportMap(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type) throws IOException
Get the list of datanodes per subcluster.
- Parameters:
type - Type of the datanodes to get.
- Returns: nsId to datanode list.
- Throws:
IOException - If the method cannot be invoked remotely.
-
getDatanodeStorageReportMapAsync
public Map<String,org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport[]> getDatanodeStorageReportMapAsync(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type) throws IOException
- Throws: IOException
-
getDatanodeStorageReportMap
public Map<String,org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport[]> getDatanodeStorageReportMap(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type, boolean requireResponse, long timeOutMs) throws IOException
Get the list of datanodes per subcluster.
- Parameters:
type - Type of the datanodes to get.
requireResponse - If true, an exception is thrown unless all calls complete. If false, exceptions are ignored and all data successfully received is returned.
timeOutMs - Time out for the reply in milliseconds.
- Returns: nsId to datanode list.
- Throws:
IOException - If the method cannot be invoked remotely.
-
getDatanodeStorageReportMapAsync
public Map<String,org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport[]> getDatanodeStorageReportMapAsync(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type, boolean requireResponse, long timeOutMs) throws IOException
Get the list of datanodes per subcluster. Asynchronous version of the getDatanodeStorageReportMap method.
- Parameters:
type - Type of the datanodes to get.
requireResponse - If true, an exception is thrown unless all calls complete. If false, exceptions are ignored and all data successfully received is returned.
timeOutMs - Time out for the reply in milliseconds.
- Returns: nsId to datanode list.
- Throws:
IOException - If the method cannot be invoked remotely.
-
setSafeMode
public boolean setSafeMode(org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction action, boolean isChecked) throws IOException - Specified by:
setSafeMode in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
restoreFailedStorage
- Specified by:
restoreFailedStorage in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
saveNamespace
- Specified by:
saveNamespace in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
rollEdits
- Specified by:
rollEdits in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
refreshNodes
- Specified by:
refreshNodes in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
finalizeUpgrade
- Specified by:
finalizeUpgrade in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
upgradeStatus
- Specified by:
upgradeStatus in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
rollingUpgrade
public org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo rollingUpgrade(org.apache.hadoop.hdfs.protocol.HdfsConstants.RollingUpgradeAction action) throws IOException - Specified by:
rollingUpgrade in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
metaSave
- Specified by:
metaSave in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
listCorruptFileBlocks
public org.apache.hadoop.hdfs.protocol.CorruptFileBlocks listCorruptFileBlocks(String path, String cookie) throws IOException - Specified by:
listCorruptFileBlocks in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
setBalancerBandwidth
- Specified by:
setBalancerBandwidth in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getContentSummary
- Specified by:
getContentSummary in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
fsync
public void fsync(String src, long fileId, String clientName, long lastBlockLength) throws IOException - Specified by:
fsync in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
setTimes
- Specified by:
setTimes in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
createSymlink
public void createSymlink(String target, String link, org.apache.hadoop.fs.permission.FsPermission dirPerms, boolean createParent) throws IOException - Specified by:
createSymlink in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getLinkTarget
- Specified by:
getLinkTarget in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
allowSnapshot
- Specified by:
allowSnapshot in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
disallowSnapshot
- Specified by:
disallowSnapshot in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
renameSnapshot
public void renameSnapshot(String snapshotRoot, String snapshotOldName, String snapshotNewName) throws IOException - Specified by:
renameSnapshot in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getSnapshottableDirListing
public org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus[] getSnapshottableDirListing() throws IOException - Specified by:
getSnapshottableDirListing in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getSnapshotListing
public org.apache.hadoop.hdfs.protocol.SnapshotStatus[] getSnapshotListing(String snapshotRoot) throws IOException - Specified by:
getSnapshotListing in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getSnapshotDiffReport
public org.apache.hadoop.hdfs.protocol.SnapshotDiffReport getSnapshotDiffReport(String snapshotRoot, String earlierSnapshotName, String laterSnapshotName) throws IOException - Specified by:
getSnapshotDiffReport in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getSnapshotDiffReportListing
public org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing getSnapshotDiffReportListing(String snapshotRoot, String earlierSnapshotName, String laterSnapshotName, byte[] startPath, int index) throws IOException - Specified by:
getSnapshotDiffReportListing in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
addCacheDirective
public long addCacheDirective(org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo path, EnumSet<org.apache.hadoop.fs.CacheFlag> flags) throws IOException - Specified by:
addCacheDirective in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
modifyCacheDirective
public void modifyCacheDirective(org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo directive, EnumSet<org.apache.hadoop.fs.CacheFlag> flags) throws IOException - Specified by:
modifyCacheDirective in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
removeCacheDirective
- Specified by:
removeCacheDirective in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
listCacheDirectives
public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.CacheDirectiveEntry> listCacheDirectives(long prevId, org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo filter) throws IOException - Specified by:
listCacheDirectives in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
addCachePool
- Specified by:
addCachePool in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
modifyCachePool
- Specified by:
modifyCachePool in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
removeCachePool
- Specified by:
removeCachePool in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
listCachePools
public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.CachePoolEntry> listCachePools(String prevKey) throws IOException - Specified by:
listCachePools in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
modifyAclEntries
public void modifyAclEntries(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException - Specified by:
modifyAclEntries in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
removeAclEntries
public void removeAclEntries(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException - Specified by:
removeAclEntries in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
removeDefaultAcl
- Specified by:
removeDefaultAcl in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
removeAcl
- Specified by:
removeAcl in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
setAcl
public void setAcl(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException - Specified by:
setAcl in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getAclStatus
- Specified by:
getAclStatus in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
createEncryptionZone
- Specified by:
createEncryptionZone in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getEZForPath
- Specified by:
getEZForPath in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
listEncryptionZones
public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.EncryptionZone> listEncryptionZones(long prevId) throws IOException - Specified by:
listEncryptionZones in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
reencryptEncryptionZone
public void reencryptEncryptionZone(String zone, org.apache.hadoop.hdfs.protocol.HdfsConstants.ReencryptAction action) throws IOException - Specified by:
reencryptEncryptionZone in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
listReencryptionStatus
public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.ZoneReencryptionStatus> listReencryptionStatus(long prevId) throws IOException - Specified by:
listReencryptionStatus in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
setXAttr
public void setXAttr(String src, org.apache.hadoop.fs.XAttr xAttr, EnumSet<org.apache.hadoop.fs.XAttrSetFlag> flag) throws IOException - Specified by:
setXAttr in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getXAttrs
public List<org.apache.hadoop.fs.XAttr> getXAttrs(String src, List<org.apache.hadoop.fs.XAttr> xAttrs) throws IOException - Specified by:
getXAttrs in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
listXAttrs
- Specified by:
listXAttrs in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
removeXAttr
- Specified by:
removeXAttr in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
checkAccess
public void checkAccess(String path, org.apache.hadoop.fs.permission.FsAction mode) throws IOException - Specified by:
checkAccess in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getCurrentEditLogTxid
- Specified by:
getCurrentEditLogTxid in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getEditsFromTxid
- Specified by:
getEditsFromTxid in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getDataEncryptionKey
public org.apache.hadoop.hdfs.security.token.block.DataEncryptionKey getDataEncryptionKey() throws IOException - Specified by:
getDataEncryptionKey in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
createSnapshot
- Specified by:
createSnapshot in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
deleteSnapshot
- Specified by:
deleteSnapshot in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
setQuota
public void setQuota(String path, long namespaceQuota, long storagespaceQuota, org.apache.hadoop.fs.StorageType type) throws IOException - Specified by:
setQuota in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getQuotaUsage
- Specified by:
getQuotaUsage in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
reportBadBlocks
public void reportBadBlocks(org.apache.hadoop.hdfs.protocol.LocatedBlock[] blocks) throws IOException - Specified by:
reportBadBlocks in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
unsetStoragePolicy
- Specified by:
unsetStoragePolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getStoragePolicy
public org.apache.hadoop.hdfs.protocol.BlockStoragePolicy getStoragePolicy(String path) throws IOException - Specified by:
getStoragePolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getErasureCodingPolicies
public org.apache.hadoop.hdfs.protocol.ErasureCodingPolicyInfo[] getErasureCodingPolicies() throws IOException - Specified by:
getErasureCodingPolicies in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getErasureCodingCodecs
- Specified by:
getErasureCodingCodecs in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
addErasureCodingPolicies
public org.apache.hadoop.hdfs.protocol.AddErasureCodingPolicyResponse[] addErasureCodingPolicies(org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy[] policies) throws IOException - Specified by:
addErasureCodingPolicies in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
removeErasureCodingPolicy
- Specified by:
removeErasureCodingPolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
disableErasureCodingPolicy
- Specified by:
disableErasureCodingPolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
enableErasureCodingPolicy
- Specified by:
enableErasureCodingPolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getErasureCodingPolicy
public org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy getErasureCodingPolicy(String src) throws IOException - Specified by:
getErasureCodingPolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
setErasureCodingPolicy
- Specified by:
setErasureCodingPolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
unsetErasureCodingPolicy
- Specified by:
unsetErasureCodingPolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getECTopologyResultForPolicies
public org.apache.hadoop.hdfs.protocol.ECTopologyVerifierResult getECTopologyResultForPolicies(String... policyNames) throws IOException - Specified by:
getECTopologyResultForPolicies in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getECBlockGroupStats
- Specified by:
getECBlockGroupStats in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getReplicatedBlockStats
public org.apache.hadoop.hdfs.protocol.ReplicatedBlockStats getReplicatedBlockStats() throws IOException - Specified by:
getReplicatedBlockStats in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
listOpenFiles
@Deprecated public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.OpenFileEntry> listOpenFiles(long prevId) throws IOException Deprecated. - Specified by:
listOpenFiles in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getHAServiceState
- Specified by:
getHAServiceState in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
listOpenFiles
public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.OpenFileEntry> listOpenFiles(long prevId, EnumSet<org.apache.hadoop.hdfs.protocol.OpenFilesIterator.OpenFilesType> openFilesTypes, String path) throws IOException - Specified by:
listOpenFiles in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
msync
- Specified by:
msync in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
satisfyStoragePolicy
- Specified by:
satisfyStoragePolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getSlowDatanodeReport
- Specified by:
getSlowDatanodeReport in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getEnclosingRoot
- Specified by:
getEnclosingRoot in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getBlocks
public org.apache.hadoop.hdfs.server.protocol.BlocksWithLocations getBlocks(org.apache.hadoop.hdfs.protocol.DatanodeInfo datanode, long size, long minBlockSize, long hotBlockTimeInterval, org.apache.hadoop.fs.StorageType storageType) throws IOException - Specified by:
getBlocks in interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol - Throws:
IOException
-
getBlockKeys
public org.apache.hadoop.hdfs.security.token.block.ExportedBlockKeys getBlockKeys() throws IOException - Specified by:
getBlockKeys in interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol - Throws:
IOException
-
getTransactionID
- Specified by:
getTransactionID in interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol - Throws:
IOException
-
getMostRecentCheckpointTxId
- Specified by:
getMostRecentCheckpointTxId in interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol - Throws:
IOException
-
getMostRecentNameNodeFileTxId
public long getMostRecentNameNodeFileTxId(org.apache.hadoop.hdfs.server.namenode.NNStorage.NameNodeFile nnf) throws IOException - Specified by:
getMostRecentNameNodeFileTxId in interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol - Throws:
IOException
-
rollEditLog
- Specified by:
rollEditLog in interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol - Throws:
IOException
-
versionRequest
- Specified by:
versionRequest in interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol - Throws:
IOException
-
errorReport
public void errorReport(org.apache.hadoop.hdfs.server.protocol.NamenodeRegistration registration, int errorCode, String msg) throws IOException - Specified by:
errorReport in interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol - Throws:
IOException
-
registerSubordinateNamenode
public org.apache.hadoop.hdfs.server.protocol.NamenodeRegistration registerSubordinateNamenode(org.apache.hadoop.hdfs.server.protocol.NamenodeRegistration registration) throws IOException - Specified by:
registerSubordinateNamenode in interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol - Throws:
IOException
-
startCheckpoint
public org.apache.hadoop.hdfs.server.protocol.NamenodeCommand startCheckpoint(org.apache.hadoop.hdfs.server.protocol.NamenodeRegistration registration) throws IOException - Specified by:
startCheckpoint in interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol - Throws:
IOException
-
endCheckpoint
public void endCheckpoint(org.apache.hadoop.hdfs.server.protocol.NamenodeRegistration registration, org.apache.hadoop.hdfs.server.namenode.CheckpointSignature sig) throws IOException - Specified by:
endCheckpoint in interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol - Throws:
IOException
-
getEditLogManifest
public org.apache.hadoop.hdfs.server.protocol.RemoteEditLogManifest getEditLogManifest(long sinceTxId) throws IOException - Specified by:
getEditLogManifest in interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol - Throws:
IOException
-
isUpgradeFinalized
- Specified by:
isUpgradeFinalized in interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol - Throws:
IOException
-
isRollingUpgrade
- Specified by:
isRollingUpgrade in interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol - Throws:
IOException
-
getNextSPSPath
- Specified by:
getNextSPSPath in interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol - Throws:
IOException
-
getLocationsForPath
public List<RemoteLocation> getLocationsForPath(String path, boolean failIfLocked) throws IOException
Get the possible locations of a path in the federated cluster. During the get operation, it will do the quota verification.
- Parameters:
path - Path to check.
failIfLocked - Fail the request if locked (top mount point).
- Returns:
- Prioritized list of locations in the federated cluster.
- Throws:
IOException - If the location for this path cannot be determined.
-
getLocationsForPath
public List<RemoteLocation> getLocationsForPath(String path, boolean failIfLocked, boolean needQuotaVerify) throws IOException
Get the possible locations of a path in the federated cluster.
- Parameters:
path - Path to check.
failIfLocked - Fail the request if there is any mount point under the path.
needQuotaVerify - Whether to do the quota verification.
- Returns:
- Prioritized list of locations in the federated cluster.
- Throws:
IOException - If the location for this path cannot be determined.
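The mapping from a federated path to a prioritized list of remote locations can be sketched as a longest-prefix match against a mount table. This is a standalone illustration, not the Router's implementation; `MountResolveSketch`, its string-based "nsId:path" destinations, and `resolve` are hypothetical names:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of mount-table resolution: walk up the path until a
// mount entry matches, then rewrite the remainder onto each destination in
// the entry's priority order.
public class MountResolveSketch {
  // Mount point -> ordered list of "nsId:path" destinations.
  private final Map<String, List<String>> mounts = new HashMap<>();

  void addMount(String src, List<String> destinations) {
    mounts.put(src, destinations);
  }

  List<String> resolve(String path) {
    for (String p = path; ; p = parent(p)) {
      List<String> dests = mounts.get(p);
      if (dests != null) {
        String remainder = path.substring(p.length());
        List<String> out = new ArrayList<>();
        for (String d : dests) {
          out.add(d + remainder); // keep the mount entry's priority order
        }
        return out;
      }
      if (p.equals("/")) {
        return Collections.emptyList(); // no mount entry matches
      }
    }
  }

  private static String parent(String p) {
    int i = p.lastIndexOf('/');
    return i <= 0 ? "/" : p.substring(0, i);
  }
}
```

For example, with a mount entry `/data -> [ns0:/data, ns1:/backup]`, resolving `/data/logs/a.txt` yields the prioritized list `ns0:/data/logs/a.txt`, `ns1:/backup/logs/a.txt`.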
-
getRemoteUser
Get the user that is invoking this operation.- Returns:
- Remote user group information.
- Throws:
IOException- If we cannot get the user information.
-
merge
Merge the outputs from multiple namespaces.- Type Parameters:
T- The type of the objects to merge.- Parameters:
map- Namespace to Output array.clazz- Class of the values.- Returns:
- Array with the outputs.
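The merge behavior described above can be sketched in plain Java: concatenate each namespace's result array into one array of the given element class. This is a standalone sketch under those assumptions, not the Router's code; `MergeSketch` is a hypothetical name:

```java
import java.lang.reflect.Array;
import java.util.Map;

// Hypothetical sketch of merging per-namespace result arrays into a single
// array. The Class parameter is what makes building the generic array possible.
public class MergeSketch {
  @SuppressWarnings("unchecked")
  static <T> T[] merge(Map<String, T[]> map, Class<T> clazz) {
    int total = 0;
    for (T[] arr : map.values()) {
      total += arr.length; // size the output across all namespaces
    }
    T[] out = (T[]) Array.newInstance(clazz, total);
    int pos = 0;
    for (T[] arr : map.values()) {
      System.arraycopy(arr, 0, out, pos, arr.length);
      pos += arr.length;
    }
    return out;
  }
}
```

Java cannot instantiate `new T[]` directly, which is why a `Class<T>` argument and `java.lang.reflect.Array.newInstance` are the natural way to build the output array.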
-
getQuotaModule
Get the quota module implementation.
- Returns:
- The quota module implementation.
-
getClientProtocolModule
Get the ClientProtocol module implementation.
- Returns:
- The ClientProtocol implementation.
-
getRPCMetrics
Get RPC metrics info.
- Returns:
- The instance of FederationRPCMetrics.
-
isPathAll
Check if a path should be present in all subclusters.
- Parameters:
path - Path to check.
- Returns:
- Whether the path should be in all subclusters.
-
isInvokeConcurrent
Check whether a call needs to be invoked in all the locations. The call is invoked in all locations when the destination order of the mount entry is HASH_ALL, RANDOM, or SPACE, or when the source is itself a mount entry.
- Parameters:
path - The path on which the operation needs to be invoked.
- Returns:
- true if the call should be invoked in all locations.
- Throws:
IOException - If an I/O error occurs.
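The rule for deciding when to fan out can be sketched in isolation. This is a hypothetical illustration, not the Router's implementation; the `InvokeConcurrentSketch` class, its mini mount table, and the subset of destination orders shown are assumptions made for the example:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the rule above: invoke concurrently when the mount
// entry's destination order is HASH_ALL, RANDOM, or SPACE, or when the path
// is itself a mount entry.
public class InvokeConcurrentSketch {
  enum DestinationOrder { HASH, LOCAL, RANDOM, HASH_ALL, SPACE }

  private final Map<String, DestinationOrder> mountOrders = new HashMap<>();

  void addMount(String src, DestinationOrder order) {
    mountOrders.put(src, order);
  }

  boolean isInvokeConcurrent(String path) {
    // The path itself is a mount entry: fan out to every location.
    if (mountOrders.containsKey(path)) {
      return true;
    }
    DestinationOrder order = orderForLongestPrefix(path);
    return order == DestinationOrder.HASH_ALL
        || order == DestinationOrder.RANDOM
        || order == DestinationOrder.SPACE;
  }

  // Find the destination order of the deepest mount entry above the path.
  private DestinationOrder orderForLongestPrefix(String path) {
    String best = null;
    for (String src : mountOrders.keySet()) {
      String prefix = src.endsWith("/") ? src : src + "/";
      if (path.startsWith(prefix) && (best == null || src.length() > best.length())) {
        best = src;
      }
    }
    return best == null ? null : mountOrders.get(best);
  }
}
```

Under a mount entry with LOCAL or HASH order a call can go to a single subcluster, while HASH_ALL, RANDOM, and SPACE spread data across subclusters, so the call must be made everywhere.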
-
refreshUserToGroupsMappings
- Specified by:
refreshUserToGroupsMappings in interface org.apache.hadoop.security.RefreshUserMappingsProtocol - Throws:
IOException
-
refreshSuperUserGroupsConfiguration
- Specified by:
refreshSuperUserGroupsConfiguration in interface org.apache.hadoop.security.RefreshUserMappingsProtocol - Throws:
IOException
-
getGroupsForUser
- Specified by:
getGroupsForUser in interface org.apache.hadoop.tools.GetUserMappingsProtocol - Throws:
IOException
-
getRouterFederationRenameCount
public int getRouterFederationRenameCount() -
getSchedulerJobCount
public int getSchedulerJobCount() -
refreshFairnessPolicyController
-
getSlowDatanodeReport
public org.apache.hadoop.hdfs.protocol.DatanodeInfo[] getSlowDatanodeReport(boolean requireResponse, long timeOutMs) throws IOException
Get the slow running datanodes report with a timeout.
- Parameters:
requireResponse - Whether we require all the namespaces to report.
timeOutMs - Timeout for the reply in milliseconds.
- Returns:
- List of datanodes.
- Throws:
IOException - If it cannot get the report.
-
getSlowDatanodeReportAsync
public org.apache.hadoop.hdfs.protocol.DatanodeInfo[] getSlowDatanodeReportAsync(boolean requireResponse, long timeOutMs) throws IOException
Get the slow running datanodes report with a timeout. Asynchronous version of the getSlowDatanodeReport method.
- Parameters:
requireResponse - Whether we require all the namespaces to report.
timeOutMs - Timeout for the reply in milliseconds.
- Returns:
- List of datanodes.
- Throws:
IOException - If it cannot get the report.
-
isAsync
public boolean isAsync() -
getAsyncRouterHandlerExecutors
-
getRouterAsyncHandlerDefaultExecutor
-