Class RouterClientProtocol
java.lang.Object
org.apache.hadoop.hdfs.server.federation.router.RouterClientProtocol
- All Implemented Interfaces:
org.apache.hadoop.hdfs.protocol.ClientProtocol
- Direct Known Subclasses:
RouterAsyncClientProtocol
public class RouterClientProtocol
extends Object
implements org.apache.hadoop.hdfs.protocol.ClientProtocol
Module that implements all the RPC calls in ClientProtocol in the RouterRpcServer.
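Because the router exposes the standard ClientProtocol, an ordinary HDFS client can use it as if it were a NameNode. A minimal client-side sketch; the endpoint router.example.com:8888 is a placeholder (the real address comes from dfs.federation.router.rpc-address):
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RouterClientExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Every call below is served by RouterClientProtocol inside the RouterRpcServer.
    FileSystem fs = FileSystem.get(URI.create("hdfs://router.example.com:8888"), conf);
    fs.mkdirs(new Path("/tmp/rbf-demo"));
    System.out.println(fs.getFileStatus(new Path("/tmp/rbf-demo")));
    fs.close();
  }
}
-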
Nested Class Summary
Nested Classes -
Field Summary
Fields inherited from interface org.apache.hadoop.hdfs.protocol.ClientProtocol
GET_STATS_BYTES_IN_FUTURE_BLOCKS_IDX, GET_STATS_CAPACITY_IDX, GET_STATS_CORRUPT_BLOCKS_IDX, GET_STATS_LOW_REDUNDANCY_IDX, GET_STATS_MISSING_BLOCKS_IDX, GET_STATS_MISSING_REPL_ONE_BLOCKS_IDX, GET_STATS_PENDING_DELETION_BLOCKS_IDX, GET_STATS_REMAINING_IDX, GET_STATS_UNDER_REPLICATED_IDX, GET_STATS_USED_IDX, STATS_ARRAY_LENGTH, versionID -
Constructor Summary
RouterClientProtocol(org.apache.hadoop.conf.Configuration conf, RouterRpcServer rpcServer)
-
Method Summary
void abandonBlock(org.apache.hadoop.hdfs.protocol.ExtendedBlock b, long fileId, String src, String holder)
org.apache.hadoop.hdfs.protocol.LocatedBlock addBlock(String src, String clientName, org.apache.hadoop.hdfs.protocol.ExtendedBlock previous, org.apache.hadoop.hdfs.protocol.DatanodeInfo[] excludedNodes, long fileId, String[] favoredNodes, EnumSet<org.apache.hadoop.hdfs.AddBlockFlag> addBlockFlags) - Excluded and favored nodes are not verified and will be ignored by the placement policy if they are not in the same nameservice as the file.
long addCacheDirective(org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo path, EnumSet<org.apache.hadoop.fs.CacheFlag> flags)
void addCachePool(org.apache.hadoop.hdfs.protocol.CachePoolInfo info)
org.apache.hadoop.hdfs.protocol.AddErasureCodingPolicyResponse[] addErasureCodingPolicies(org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy[] policies)
protected org.apache.hadoop.fs.ContentSummary aggregateContentSummary(Collection<org.apache.hadoop.fs.ContentSummary> summaries) - Aggregate content summaries for each subcluster.
void allowSnapshot(String snapshotRoot)
org.apache.hadoop.hdfs.protocol.LastBlockWithStatus append(String src, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag)
void cancelDelegationToken(org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier> token)
void checkAccess(String path, org.apache.hadoop.fs.permission.FsAction mode)
protected List<RemoteLocation> checkFaultTolerantRetry(RemoteMethod method, String src, IOException ioe, RemoteLocation excludeLoc, List<RemoteLocation> locations) - Check if a remote method can be retried in other subclusters when it failed in the original destination.
boolean complete(String src, String clientName, org.apache.hadoop.hdfs.protocol.ExtendedBlock last, long fileId)
void concat(String trg, String[] srcs)
org.apache.hadoop.hdfs.protocol.HdfsFileStatus create(String src, org.apache.hadoop.fs.permission.FsPermission masked, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize, org.apache.hadoop.crypto.CryptoProtocolVersion[] supportedVersions, String ecPolicyName, String storagePolicy)
void createEncryptionZone(String src, String keyName)
String createSnapshot(String snapshotRoot, String snapshotName)
void createSymlink(String target, String link, org.apache.hadoop.fs.permission.FsPermission dirPerms, boolean createParent)
boolean delete(String src, boolean recursive)
void deleteSnapshot(String snapshotRoot, String snapshotName)
void disableErasureCodingPolicy(String ecPolicyName)
void disallowSnapshot(String snapshot)
void enableErasureCodingPolicy(String ecPolicyName)
void finalizeUpgrade()
void fsync(String src, long fileId, String clientName, long lastBlockLength)
org.apache.hadoop.fs.permission.AclStatus getAclStatus(String src)
org.apache.hadoop.hdfs.protocol.LocatedBlock getAdditionalDatanode(String src, long fileId, org.apache.hadoop.hdfs.protocol.ExtendedBlock blk, org.apache.hadoop.hdfs.protocol.DatanodeInfo[] existings, String[] existingStorageIDs, org.apache.hadoop.hdfs.protocol.DatanodeInfo[] excludes, int numAdditionalNodes, String clientName) - Excluded nodes are not verified and will be ignored by placement if they are not in the same nameservice as the file.
org.apache.hadoop.hdfs.protocol.BatchedDirectoryListing getBatchedListing(String[] srcs, byte[] startAfter, boolean needLocation)
org.apache.hadoop.hdfs.protocol.LocatedBlocks getBlockLocations(String src, long offset, long length)
org.apache.hadoop.fs.ContentSummary getContentSummary(String path)
long getCurrentEditLogTxid()
org.apache.hadoop.hdfs.security.token.block.DataEncryptionKey getDataEncryptionKey()
org.apache.hadoop.hdfs.protocol.DatanodeInfo[] getDatanodeReport(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type)
org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport[] getDatanodeStorageReport(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type)
org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport[] getDatanodeStorageReport(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type, boolean requireResponse, long timeOutMs)
org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier> getDelegationToken(org.apache.hadoop.io.Text renewer)
Map<FederationNamespaceInfo,org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier>> getDelegationTokens(org.apache.hadoop.io.Text renewer) - Get the delegation token from each name service.
org.apache.hadoop.hdfs.protocol.ECBlockGroupStats getECBlockGroupStats()
org.apache.hadoop.hdfs.protocol.ECTopologyVerifierResult getECTopologyResultForPolicies(String... policyNames)
org.apache.hadoop.hdfs.inotify.EventBatchList getEditsFromTxid(long txid)
org.apache.hadoop.fs.Path getEnclosingRoot(String src)
Map<String,String> getErasureCodingCodecs()
org.apache.hadoop.hdfs.protocol.ErasureCodingPolicyInfo[] getErasureCodingPolicies()
org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy getErasureCodingPolicy(String src)
org.apache.hadoop.hdfs.protocol.EncryptionZone getEZForPath(String src)
org.apache.hadoop.hdfs.protocol.HdfsFileStatus getFileInfo(String src)
protected org.apache.hadoop.hdfs.protocol.HdfsFileStatus getFileInfoAll(List<RemoteLocation> locations, RemoteMethod method) - Get the file info from all the locations.
protected org.apache.hadoop.hdfs.protocol.HdfsFileStatus getFileInfoAll(List<RemoteLocation> locations, RemoteMethod method, long timeOutMs) - Get the file info from all the locations.
org.apache.hadoop.hdfs.protocol.HdfsFileStatus getFileLinkInfo(String src)
RemoteLocation getFileRemoteLocation(String path)
org.apache.hadoop.ha.HAServiceProtocol.HAServiceState getHAServiceState()
String getLinkTarget(String path)
org.apache.hadoop.hdfs.protocol.DirectoryListing getListing(String src, byte[] startAfter, boolean needLocation)
protected List<RemoteResult<RemoteLocation,org.apache.hadoop.hdfs.protocol.DirectoryListing>> getListingInt(String src, byte[] startAfter, boolean needLocation) - Get a partial listing of the indicated directory.
org.apache.hadoop.hdfs.protocol.HdfsLocatedFileStatus getLocatedFileInfo(String src, boolean needBlockToken)
protected List<RemoteLocation> getLocationsForContentSummary(String path) - Get all the locations of the path for getContentSummary(String).
getMountPointDates(String path) - Get the modification dates for mount points.
protected org.apache.hadoop.hdfs.protocol.HdfsFileStatus getMountPointStatus(String name, int childrenNum, long date) - Create a new file status for a mount point.
protected org.apache.hadoop.hdfs.protocol.HdfsFileStatus getMountPointStatus(String name, int childrenNum, long date, boolean setPath) - Create a new file status for a mount point.
long getMountStatusTimeOut()
protected static org.apache.hadoop.fs.permission.FsPermission getParentPermission(org.apache.hadoop.fs.permission.FsPermission mask) - Get the permissions for the parent of a child with given permissions.
long getPreferredBlockSize(String filename)
org.apache.hadoop.fs.QuotaUsage getQuotaUsage(String path)
protected RemoteParam getRenameDestinations(List<RemoteLocation> srcLocations, List<RemoteLocation> dstLocations) - Determines combinations of eligible src/dst locations for a rename.
org.apache.hadoop.hdfs.protocol.ReplicatedBlockStats getReplicatedBlockStats()
int getRouterFederationRenameCount()
org.apache.hadoop.fs.FsServerDefaults getServerDefaults()
long getServerDefaultsLastUpdate()
long getServerDefaultsValidityPeriod()
org.apache.hadoop.hdfs.protocol.DatanodeInfo[] getSlowDatanodeReport()
org.apache.hadoop.hdfs.protocol.SnapshotDiffReport getSnapshotDiffReport(String snapshotRoot, String earlierSnapshotName, String laterSnapshotName)
org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing getSnapshotDiffReportListing(String snapshotRoot, String earlierSnapshotName, String laterSnapshotName, byte[] startPath, int index)
org.apache.hadoop.hdfs.protocol.SnapshotStatus[] getSnapshotListing(String snapshotRoot)
org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus[] getSnapshottableDirListing()
long[] getStats()
org.apache.hadoop.hdfs.protocol.BlockStoragePolicy[] getStoragePolicies()
org.apache.hadoop.hdfs.protocol.BlockStoragePolicy getStoragePolicy(String path)
List<org.apache.hadoop.fs.XAttr> getXAttrs(String src, List<org.apache.hadoop.fs.XAttr> xAttrs)
boolean isAllowPartialList()
boolean isFileClosed(String src)
boolean isMultiDestDirectory(String src) - Checks if the path is a directory and is supposed to be present in all subclusters.
protected static boolean - Check if an exception is caused by an unavailable subcluster or not.
org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.CacheDirectiveEntry> listCacheDirectives(long prevId, org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo filter)
org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.CachePoolEntry> listCachePools(String prevKey)
org.apache.hadoop.hdfs.protocol.CorruptFileBlocks listCorruptFileBlocks(String path, String cookie)
org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.EncryptionZone> listEncryptionZones(long prevId)
org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.OpenFileEntry> listOpenFiles(long prevId) - Deprecated.
org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.OpenFileEntry> listOpenFiles(long prevId, EnumSet<org.apache.hadoop.hdfs.protocol.OpenFilesIterator.OpenFilesType> openFilesTypes, String path)
org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.ZoneReencryptionStatus> listReencryptionStatus(long prevId)
List<org.apache.hadoop.fs.XAttr> listXAttrs(String src)
protected org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport[] mergeDtanodeStorageReport(Map<String, org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport[]> dnSubcluster)
void metaSave(String filename)
boolean mkdirs(String src, org.apache.hadoop.fs.permission.FsPermission masked, boolean createParent)
void modifyAclEntries(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec)
void modifyCacheDirective(org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo directive, EnumSet<org.apache.hadoop.fs.CacheFlag> flags)
void modifyCachePool(org.apache.hadoop.hdfs.protocol.CachePoolInfo info)
void msync()
boolean recoverLease(String src, String clientName)
void reencryptEncryptionZone(String zone, org.apache.hadoop.hdfs.protocol.HdfsConstants.ReencryptAction action)
void refreshNodes()
void removeAcl(String src)
void removeAclEntries(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec)
void removeCacheDirective(long id)
void removeCachePool(String cachePoolName)
void removeDefaultAcl(String src)
void removeErasureCodingPolicy(String ecPolicyName)
void removeXAttr(String src, org.apache.hadoop.fs.XAttr xAttr)
boolean rename(String src, String dst) - Deprecated.
void rename2(String src, String dst, org.apache.hadoop.fs.Options.Rename... options)
void renameSnapshot(String snapshotRoot, String snapshotOldName, String snapshotNewName)
long renewDelegationToken(org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier> token)
void renewLease(String clientName, List<String> namespaces)
void reportBadBlocks(org.apache.hadoop.hdfs.protocol.LocatedBlock[] blocks)
boolean restoreFailedStorage(String arg)
long rollEdits()
org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo rollingUpgrade(org.apache.hadoop.hdfs.protocol.HdfsConstants.RollingUpgradeAction action)
void satisfyStoragePolicy(String path)
boolean saveNamespace(long timeWindow, long txGap)
void setAcl(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec)
void setBalancerBandwidth(long bandwidth)
void setErasureCodingPolicy(String src, String ecPolicyName)
void setOwner(String src, String username, String groupname)
void setPermission(String src, org.apache.hadoop.fs.permission.FsPermission permissions)
void setQuota(String path, long namespaceQuota, long storagespaceQuota, org.apache.hadoop.fs.StorageType type)
boolean setReplication(String src, short replication)
boolean setSafeMode(org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction action, boolean isChecked)
void setServerDefaultsLastUpdate(long serverDefaultsLastUpdate)
void setStoragePolicy(String src, String policyName)
void setTimes(String src, long mtime, long atime)
void setXAttr(String src, org.apache.hadoop.fs.XAttr xAttr, EnumSet<org.apache.hadoop.fs.XAttrSetFlag> flag)
protected static boolean shouldAddMountPoint(byte[] mountPoint, byte[] lastEntry, byte[] startAfter, int remainingEntries) - Check if we should add the mount point into the total listing.
boolean truncate(String src, long newLength, String clientName)
void unsetErasureCodingPolicy(String src)
void unsetStoragePolicy(String src)
org.apache.hadoop.hdfs.protocol.LocatedBlock updateBlockForPipeline(org.apache.hadoop.hdfs.protocol.ExtendedBlock block, String clientName)
void updatePipeline(String clientName, org.apache.hadoop.hdfs.protocol.ExtendedBlock oldBlock, org.apache.hadoop.hdfs.protocol.ExtendedBlock newBlock, org.apache.hadoop.hdfs.protocol.DatanodeID[] newNodes, String[] newStorageIDs) - Datanodes are not verified to be in the same nameservice as the old block.
boolean upgradeStatus()
-
Constructor Details
-
RouterClientProtocol
public RouterClientProtocol(org.apache.hadoop.conf.Configuration conf, RouterRpcServer rpcServer)
-
Method Details
-
getDelegationToken
public org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier> getDelegationToken(org.apache.hadoop.io.Text renewer) throws IOException - Specified by:
getDelegationToken in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getDelegationTokens
public Map<FederationNamespaceInfo,org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier>> getDelegationTokens(org.apache.hadoop.io.Text renewer) throws IOException Get the delegation token from each name service. - Parameters:
renewer - The token renewer. - Returns:
- Map of name service to Token.
- Throws:
IOException - If it cannot get the delegation token.
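A hedged usage sketch (routerProto as an initialized RouterClientProtocol and the renewer name "yarn" are assumptions for illustration):
import java.io.IOException;
import java.util.Map;
import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
import org.apache.hadoop.hdfs.server.federation.resolver.FederationNamespaceInfo;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.Token;

static void printTokens(RouterClientProtocol routerProto) throws IOException {
  // One delegation token per federated name service, keyed by its namespace info.
  Map<FederationNamespaceInfo, Token<DelegationTokenIdentifier>> tokens =
      routerProto.getDelegationTokens(new Text("yarn"));
  for (Map.Entry<FederationNamespaceInfo, Token<DelegationTokenIdentifier>> e
      : tokens.entrySet()) {
    System.out.println(e.getKey().getNameserviceId() + " -> " + e.getValue());
  }
}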
-
renewDelegationToken
public long renewDelegationToken(org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier> token) throws IOException - Specified by:
renewDelegationToken in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
cancelDelegationToken
public void cancelDelegationToken(org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier> token) throws IOException - Specified by:
cancelDelegationToken in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getBlockLocations
public org.apache.hadoop.hdfs.protocol.LocatedBlocks getBlockLocations(String src, long offset, long length) throws IOException - Specified by:
getBlockLocations in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getServerDefaults
- Specified by:
getServerDefaults in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
create
public org.apache.hadoop.hdfs.protocol.HdfsFileStatus create(String src, org.apache.hadoop.fs.permission.FsPermission masked, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize, org.apache.hadoop.crypto.CryptoProtocolVersion[] supportedVersions, String ecPolicyName, String storagePolicy) throws IOException - Specified by:
create in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
checkFaultTolerantRetry
protected List<RemoteLocation> checkFaultTolerantRetry(RemoteMethod method, String src, IOException ioe, RemoteLocation excludeLoc, List<RemoteLocation> locations) throws IOException Check if a remote method can be retried in other subclusters when it failed in the original destination. This method returns the list of locations to retry in. This is used by fault-tolerant mount points. - Parameters:
method - Method that failed and might be retried.
src - Path where the method was invoked.
ioe - Exception that was triggered.
excludeLoc - Location that failed and should be excluded.
locations - All the locations to retry.
- Returns:
- The locations where we should retry (excluding the failed ones).
- Throws:
IOException - If this path is not fault tolerant or the exception should not be retried (e.g., NSQuotaExceededException).
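Retries of this kind only apply to mount points flagged as fault tolerant. A hedged sketch of creating such an entry with the RBF mount table records API; MountTable.newInstance, setDestOrder, and setFaultTolerant are assumed to match your Hadoop version:
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.hdfs.server.federation.resolver.order.DestinationOrder;
import org.apache.hadoop.hdfs.server.federation.store.records.MountTable;

static MountTable faultTolerantEntry() throws IOException {
  Map<String, String> destinations = new HashMap<>();
  destinations.put("ns0", "/data"); // the same path in two subclusters
  destinations.put("ns1", "/data");
  MountTable entry = MountTable.newInstance("/data", destinations);
  entry.setDestOrder(DestinationOrder.HASH_ALL); // multi-destination ordering
  entry.setFaultTolerant(true); // allows retrying in the remaining subclusters
  return entry;
}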
-
append
public org.apache.hadoop.hdfs.protocol.LastBlockWithStatus append(String src, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag) throws IOException - Specified by:
append in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
recoverLease
- Specified by:
recoverLease in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
setReplication
- Specified by:
setReplication in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
setStoragePolicy
- Specified by:
setStoragePolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getStoragePolicies
- Specified by:
getStoragePolicies in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
setPermission
public void setPermission(String src, org.apache.hadoop.fs.permission.FsPermission permissions) throws IOException - Specified by:
setPermission in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
setOwner
- Specified by:
setOwner in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
addBlock
public org.apache.hadoop.hdfs.protocol.LocatedBlock addBlock(String src, String clientName, org.apache.hadoop.hdfs.protocol.ExtendedBlock previous, org.apache.hadoop.hdfs.protocol.DatanodeInfo[] excludedNodes, long fileId, String[] favoredNodes, EnumSet<org.apache.hadoop.hdfs.AddBlockFlag> addBlockFlags) throws IOException Excluded and favored nodes are not verified and will be ignored by the placement policy if they are not in the same nameservice as the file. - Specified by:
addBlock in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getAdditionalDatanode
public org.apache.hadoop.hdfs.protocol.LocatedBlock getAdditionalDatanode(String src, long fileId, org.apache.hadoop.hdfs.protocol.ExtendedBlock blk, org.apache.hadoop.hdfs.protocol.DatanodeInfo[] existings, String[] existingStorageIDs, org.apache.hadoop.hdfs.protocol.DatanodeInfo[] excludes, int numAdditionalNodes, String clientName) throws IOException Excluded nodes are not verified and will be ignored by placement if they are not in the same nameservice as the file. - Specified by:
getAdditionalDatanode in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
abandonBlock
public void abandonBlock(org.apache.hadoop.hdfs.protocol.ExtendedBlock b, long fileId, String src, String holder) throws IOException - Specified by:
abandonBlock in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
complete
public boolean complete(String src, String clientName, org.apache.hadoop.hdfs.protocol.ExtendedBlock last, long fileId) throws IOException - Specified by:
complete in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
updateBlockForPipeline
public org.apache.hadoop.hdfs.protocol.LocatedBlock updateBlockForPipeline(org.apache.hadoop.hdfs.protocol.ExtendedBlock block, String clientName) throws IOException - Specified by:
updateBlockForPipeline in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
updatePipeline
public void updatePipeline(String clientName, org.apache.hadoop.hdfs.protocol.ExtendedBlock oldBlock, org.apache.hadoop.hdfs.protocol.ExtendedBlock newBlock, org.apache.hadoop.hdfs.protocol.DatanodeID[] newNodes, String[] newStorageIDs) throws IOException Datanodes are not verified to be in the same nameservice as the old block. TODO: this may require validation. - Specified by:
updatePipeline in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getPreferredBlockSize
- Specified by:
getPreferredBlockSize in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
rename
Deprecated. - Specified by:
rename in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
rename2
public void rename2(String src, String dst, org.apache.hadoop.fs.Options.Rename... options) throws IOException - Specified by:
rename2 in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
concat
- Specified by:
concat in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
truncate
- Specified by:
truncate in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
delete
- Specified by:
delete in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
mkdirs
public boolean mkdirs(String src, org.apache.hadoop.fs.permission.FsPermission masked, boolean createParent) throws IOException - Specified by:
mkdirs in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
renewLease
- Specified by:
renewLease in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getComparator
-
getListing
public org.apache.hadoop.hdfs.protocol.DirectoryListing getListing(String src, byte[] startAfter, boolean needLocation) throws IOException - Specified by:
getListing in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getBatchedListing
public org.apache.hadoop.hdfs.protocol.BatchedDirectoryListing getBatchedListing(String[] srcs, byte[] startAfter, boolean needLocation) throws IOException - Specified by:
getBatchedListing in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getFileInfo
- Specified by:
getFileInfo in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getFileRemoteLocation
- Throws:
IOException
-
isFileClosed
- Specified by:
isFileClosed in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getFileLinkInfo
public org.apache.hadoop.hdfs.protocol.HdfsFileStatus getFileLinkInfo(String src) throws IOException - Specified by:
getFileLinkInfo in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getLocatedFileInfo
public org.apache.hadoop.hdfs.protocol.HdfsLocatedFileStatus getLocatedFileInfo(String src, boolean needBlockToken) throws IOException - Specified by:
getLocatedFileInfo in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getStats
- Specified by:
getStats in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
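The returned array can be indexed with the GET_STATS_*_IDX constants inherited from ClientProtocol (see the Field Summary above). A small sketch, assuming clientProtocol is any reachable ClientProtocol implementation such as this class:
import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;

static void printStats(ClientProtocol clientProtocol) throws IOException {
  long[] stats = clientProtocol.getStats(); // aggregated across the subclusters
  System.out.printf("capacity=%d used=%d remaining=%d missing=%d%n",
      stats[ClientProtocol.GET_STATS_CAPACITY_IDX],
      stats[ClientProtocol.GET_STATS_USED_IDX],
      stats[ClientProtocol.GET_STATS_REMAINING_IDX],
      stats[ClientProtocol.GET_STATS_MISSING_BLOCKS_IDX]);
}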
-
getDatanodeReport
public org.apache.hadoop.hdfs.protocol.DatanodeInfo[] getDatanodeReport(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type) throws IOException - Specified by:
getDatanodeReport in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getDatanodeStorageReport
public org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport[] getDatanodeStorageReport(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type) throws IOException - Specified by:
getDatanodeStorageReport in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getDatanodeStorageReport
public org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport[] getDatanodeStorageReport(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type, boolean requireResponse, long timeOutMs) throws IOException - Throws:
IOException
-
mergeDtanodeStorageReport
-
setSafeMode
public boolean setSafeMode(org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction action, boolean isChecked) throws IOException - Specified by:
setSafeMode in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
restoreFailedStorage
- Specified by:
restoreFailedStorage in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
saveNamespace
- Specified by:
saveNamespace in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
rollEdits
- Specified by:
rollEdits in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
refreshNodes
- Specified by:
refreshNodes in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
finalizeUpgrade
- Specified by:
finalizeUpgrade in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
upgradeStatus
- Specified by:
upgradeStatus in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
rollingUpgrade
public org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo rollingUpgrade(org.apache.hadoop.hdfs.protocol.HdfsConstants.RollingUpgradeAction action) throws IOException - Specified by:
rollingUpgrade in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
metaSave
- Specified by:
metaSave in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
listCorruptFileBlocks
public org.apache.hadoop.hdfs.protocol.CorruptFileBlocks listCorruptFileBlocks(String path, String cookie) throws IOException - Specified by:
listCorruptFileBlocks in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
setBalancerBandwidth
- Specified by:
setBalancerBandwidth in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getLocationsForContentSummary
@VisibleForTesting protected List<RemoteLocation> getLocationsForContentSummary(String path) throws IOException Get all the locations of the path for getContentSummary(String). For example, given the mount points:
/a - [ns0 - /a]
/a/b - [ns0 - /a/b]
/a/b/c - [ns1 - /a/b/c]
When the path is '/a', the locations returned should be [RemoteLocation('/a', ns0, '/a'), RemoteLocation('/a/b/c', ns1, '/a/b/c')]. When the path is '/b', the call throws NoLocationException. - Parameters:
path - The path to get the content summary for. - Returns:
- A list containing all the remote locations.
- Throws:
IOException - If an I/O error occurs.
-
getContentSummary
- Specified by:
getContentSummary in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
fsync
public void fsync(String src, long fileId, String clientName, long lastBlockLength) throws IOException - Specified by:
fsync in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
setTimes
- Specified by:
setTimes in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
createSymlink
public void createSymlink(String target, String link, org.apache.hadoop.fs.permission.FsPermission dirPerms, boolean createParent) throws IOException - Specified by:
createSymlink in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getLinkTarget
- Specified by:
getLinkTarget in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
allowSnapshot
- Specified by:
allowSnapshot in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
disallowSnapshot
- Specified by:
disallowSnapshot in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
renameSnapshot
public void renameSnapshot(String snapshotRoot, String snapshotOldName, String snapshotNewName) throws IOException - Specified by:
renameSnapshot in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getSnapshottableDirListing
public org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus[] getSnapshottableDirListing() throws IOException - Specified by:
getSnapshottableDirListing in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getSnapshotListing
public org.apache.hadoop.hdfs.protocol.SnapshotStatus[] getSnapshotListing(String snapshotRoot) throws IOException - Specified by:
getSnapshotListing in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getSnapshotDiffReport
public org.apache.hadoop.hdfs.protocol.SnapshotDiffReport getSnapshotDiffReport(String snapshotRoot, String earlierSnapshotName, String laterSnapshotName) throws IOException - Specified by:
getSnapshotDiffReport in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getSnapshotDiffReportListing
public org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing getSnapshotDiffReportListing(String snapshotRoot, String earlierSnapshotName, String laterSnapshotName, byte[] startPath, int index) throws IOException - Specified by:
getSnapshotDiffReportListing in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
addCacheDirective
public long addCacheDirective(org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo path, EnumSet<org.apache.hadoop.fs.CacheFlag> flags) throws IOException - Specified by:
addCacheDirective in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
modifyCacheDirective
public void modifyCacheDirective(org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo directive, EnumSet<org.apache.hadoop.fs.CacheFlag> flags) throws IOException - Specified by:
modifyCacheDirective in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
removeCacheDirective
- Specified by:
removeCacheDirective in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
listCacheDirectives
public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.CacheDirectiveEntry> listCacheDirectives(long prevId, org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo filter) throws IOException - Specified by:
listCacheDirectives in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
addCachePool
- Specified by:
addCachePool in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
modifyCachePool
- Specified by:
modifyCachePool in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
removeCachePool
- Specified by:
removeCachePool in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
listCachePools
public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.CachePoolEntry> listCachePools(String prevKey) throws IOException - Specified by:
listCachePools in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
modifyAclEntries
public void modifyAclEntries(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException - Specified by:
modifyAclEntries in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
removeAclEntries
public void removeAclEntries(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException - Specified by:
removeAclEntries in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
removeDefaultAcl
- Specified by:
removeDefaultAcl in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
removeAcl
- Specified by:
removeAcl in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
setAcl
public void setAcl(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException - Specified by:
setAcl in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getAclStatus
- Specified by:
getAclStatus in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
createEncryptionZone
- Specified by:
createEncryptionZone in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getEZForPath
- Specified by:
getEZForPath in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
listEncryptionZones
public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.EncryptionZone> listEncryptionZones(long prevId) throws IOException - Specified by:
listEncryptionZones in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
reencryptEncryptionZone
public void reencryptEncryptionZone(String zone, org.apache.hadoop.hdfs.protocol.HdfsConstants.ReencryptAction action) throws IOException - Specified by:
reencryptEncryptionZone in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
listReencryptionStatus
public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.ZoneReencryptionStatus> listReencryptionStatus(long prevId) throws IOException - Specified by:
listReencryptionStatus in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
setXAttr
public void setXAttr(String src, org.apache.hadoop.fs.XAttr xAttr, EnumSet<org.apache.hadoop.fs.XAttrSetFlag> flag) throws IOException - Specified by:
setXAttr in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getXAttrs
public List<org.apache.hadoop.fs.XAttr> getXAttrs(String src, List<org.apache.hadoop.fs.XAttr> xAttrs) throws IOException - Specified by:
getXAttrs in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
listXAttrs
- Specified by:
listXAttrs in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
removeXAttr
- Specified by:
removeXAttr in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
checkAccess
public void checkAccess(String path, org.apache.hadoop.fs.permission.FsAction mode) throws IOException - Specified by:
checkAccess in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getCurrentEditLogTxid
- Specified by:
getCurrentEditLogTxid in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getEditsFromTxid
- Specified by:
getEditsFromTxid in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getDataEncryptionKey
public org.apache.hadoop.hdfs.security.token.block.DataEncryptionKey getDataEncryptionKey() throws IOException - Specified by:
getDataEncryptionKey in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
createSnapshot
- Specified by:
createSnapshot in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
deleteSnapshot
- Specified by:
deleteSnapshot in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
setQuota
public void setQuota(String path, long namespaceQuota, long storagespaceQuota, org.apache.hadoop.fs.StorageType type) throws IOException - Specified by:
setQuota in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getQuotaUsage
- Specified by:
getQuotaUsage in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
reportBadBlocks
public void reportBadBlocks(org.apache.hadoop.hdfs.protocol.LocatedBlock[] blocks) throws IOException - Specified by:
reportBadBlocks in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
unsetStoragePolicy
- Specified by:
unsetStoragePolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getStoragePolicy
public org.apache.hadoop.hdfs.protocol.BlockStoragePolicy getStoragePolicy(String path) throws IOException - Specified by:
getStoragePolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getErasureCodingPolicies
public org.apache.hadoop.hdfs.protocol.ErasureCodingPolicyInfo[] getErasureCodingPolicies() throws IOException - Specified by:
getErasureCodingPolicies in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getErasureCodingCodecs
- Specified by:
getErasureCodingCodecs in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
addErasureCodingPolicies
public org.apache.hadoop.hdfs.protocol.AddErasureCodingPolicyResponse[] addErasureCodingPolicies(org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy[] policies) throws IOException - Specified by:
addErasureCodingPolicies in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
removeErasureCodingPolicy
- Specified by:
removeErasureCodingPolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
disableErasureCodingPolicy
- Specified by:
disableErasureCodingPolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
enableErasureCodingPolicy
- Specified by:
enableErasureCodingPolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getErasureCodingPolicy
public org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy getErasureCodingPolicy(String src) throws IOException - Specified by:
getErasureCodingPolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
setErasureCodingPolicy
- Specified by:
setErasureCodingPolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
unsetErasureCodingPolicy
- Specified by:
unsetErasureCodingPolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getECTopologyResultForPolicies
public org.apache.hadoop.hdfs.protocol.ECTopologyVerifierResult getECTopologyResultForPolicies(String... policyNames) throws IOException - Specified by:
getECTopologyResultForPolicies in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getECBlockGroupStats
- Specified by:
getECBlockGroupStats in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getReplicatedBlockStats
public org.apache.hadoop.hdfs.protocol.ReplicatedBlockStats getReplicatedBlockStats() throws IOException - Specified by:
getReplicatedBlockStats in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
listOpenFiles
@Deprecated public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.OpenFileEntry> listOpenFiles(long prevId) throws IOException Deprecated. - Specified by:
listOpenFiles in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
listOpenFiles
public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.OpenFileEntry> listOpenFiles(long prevId, EnumSet<org.apache.hadoop.hdfs.protocol.OpenFilesIterator.OpenFilesType> openFilesTypes, String path) throws IOException - Specified by:
listOpenFiles in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
msync
- Specified by:
msync in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
satisfyStoragePolicy
- Specified by:
satisfyStoragePolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getSlowDatanodeReport
- Specified by:
getSlowDatanodeReport in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getEnclosingRoot
- Specified by:
getEnclosingRoot in interface org.apache.hadoop.hdfs.protocol.ClientProtocol - Throws:
IOException
-
getHAServiceState
public org.apache.hadoop.ha.HAServiceProtocol.HAServiceState getHAServiceState() - Specified by:
getHAServiceState in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
-
getRenameDestinations
protected RemoteParam getRenameDestinations(List<RemoteLocation> srcLocations, List<RemoteLocation> dstLocations) throws IOException Determines combinations of eligible src/dst locations for a rename. A rename cannot change the namespace. Renames are only allowed if there is an eligible dst location in the same namespace as the source. - Parameters:
srcLocations - List of all potential source destinations where the path may be located. On return this list is trimmed to include only the paths that have corresponding destinations in the same namespace.
dstLocations - The destination path.
- Returns:
- A map of all eligible source namespaces and their corresponding replacement value.
- Throws:
IOException - If the dst paths could not be determined.
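From a client's perspective this means a rename through the router only succeeds when source and destination resolve into the same namespace. A sketch under assumed mount points /ns0dir (mapped to ns0) and /ns1dir (mapped to ns1), with fs bound to the router:
fs.rename(new Path("/ns0dir/a"), new Path("/ns0dir/b")); // same namespace: allowed
fs.rename(new Path("/ns0dir/a"), new Path("/ns1dir/a")); // no dst location in ns0: rejected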
-
aggregateContentSummary
protected org.apache.hadoop.fs.ContentSummary aggregateContentSummary(Collection<org.apache.hadoop.fs.ContentSummary> summaries) Aggregate content summaries for each subcluster. If the mount point has multiple destinations, add the quota set value only once. - Parameters:
summaries - Collection of individual summaries. - Returns:
- Aggregated content summary.
-
getFileInfoAll
protected org.apache.hadoop.hdfs.protocol.HdfsFileStatus getFileInfoAll(List<RemoteLocation> locations, RemoteMethod method) throws IOException Get the file info from all the locations. - Parameters:
locations - Locations to check.
method - The file information method to run.
- Returns:
- The first file info if it's a file, the directory if it's everywhere.
- Throws:
IOException - If all the locations throw an exception.
-
getFileInfoAll
protected org.apache.hadoop.hdfs.protocol.HdfsFileStatus getFileInfoAll(List<RemoteLocation> locations, RemoteMethod method, long timeOutMs) throws IOException Get the file info from all the locations. - Parameters:
locations - Locations to check.
method - The file information method to run.
timeOutMs - Time out for the operation in milliseconds.
- Returns:
- The first file info if it's a file, the directory if it's everywhere.
- Throws:
IOException - If all the locations throw an exception.
-
getParentPermission
protected static org.apache.hadoop.fs.permission.FsPermission getParentPermission(org.apache.hadoop.fs.permission.FsPermission mask) Get the permissions for the parent of a child with given permissions. Adds implicit u+wx permission for the parent. This is based on FSDirMkdirOp#addImplicitUwx. - Parameters:
mask - The permission mask of the child.
- Returns:
- The permission mask of the parent.
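A worked example of the implicit u+wx rule (0300 is owner write+execute; the OR computation mirrors what FSDirMkdirOp#addImplicitUwx does and is shown here for illustration):
import org.apache.hadoop.fs.permission.FsPermission;

FsPermission child = new FsPermission((short) 0644); // rw-r--r--
// Parents get owner write+execute so the created chain stays traversable:
FsPermission parent = new FsPermission((short) (child.toShort() | 0300)); // rwxr--r--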
-
getMountPointStatus
@VisibleForTesting protected org.apache.hadoop.hdfs.protocol.HdfsFileStatus getMountPointStatus(String name, int childrenNum, long date) Create a new file status for a mount point. - Parameters:
name - Name of the mount point.
childrenNum - Number of children.
date - Modification date.
- Returns:
- New HDFS file status representing a mount point.
-
getMountPointStatus
@VisibleForTesting protected org.apache.hadoop.hdfs.protocol.HdfsFileStatus getMountPointStatus(String name, int childrenNum, long date, boolean setPath) Create a new file status for a mount point. - Parameters:
name - Name of the mount point.
childrenNum - Number of children.
date - Modification date.
setPath - If true, set the path in the HdfsFileStatus.
- Returns:
- New HDFS file status representing a mount point.
-
getMountPointDates
Get the modification dates for mount points. - Parameters:
path - Name of the path to start checking dates from.
- Returns:
- Map with the modification dates for all sub-entries.
-
getListingInt
protected List<RemoteResult<RemoteLocation,org.apache.hadoop.hdfs.protocol.DirectoryListing>> getListingInt(String src, byte[] startAfter, boolean needLocation) throws IOException Get a partial listing of the indicated directory. - Parameters:
src - The directory name.
startAfter - The name to start listing after.
needLocation - If block locations need to be returned.
- Returns:
- A partial listing starting after startAfter.
- Throws:
IOException - If an I/O error occurred.
-
shouldAddMountPoint
protected static boolean shouldAddMountPoint(byte[] mountPoint, byte[] lastEntry, byte[] startAfter, int remainingEntries) Check if we should add the mount point into the total listing. This should be done in either of two cases: 1) the current mount point is between startAfter and the cutoff lastEntry; 2) there are no remaining entries from the subclusters and this mount point is bigger than all files from the subclusters. This makes sure that the following batch of the getListing call will use the correct startAfter, which is the lastEntry from the subcluster. - Parameters:
mountPoint - The mount point inside the router to be added.
lastEntry - The biggest listing from the subclusters.
startAfter - Start of the listing from the client, used to define the listing start boundary.
remainingEntries - How many entries are left from the subcluster.
- Returns:
- True if the mount point should be added; false otherwise.
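A hedged paraphrase of the two accept conditions in code form; unsigned lexicographic byte comparison is assumed, and the real implementation additionally handles null boundaries:
import java.util.Arrays;

static boolean shouldAddMountPointSketch(byte[] mountPoint, byte[] lastEntry,
    byte[] startAfter, int remainingEntries) {
  // Case 1: the mount point is after startAfter and not past the cutoff lastEntry.
  if (Arrays.compareUnsigned(mountPoint, startAfter) > 0
      && Arrays.compareUnsigned(mountPoint, lastEntry) <= 0) {
    return true;
  }
  // Case 2: the subclusters are exhausted and the mount point sorts after all
  // of their entries, so it is emitted now to keep the next startAfter correct.
  return remainingEntries == 0 && Arrays.compareUnsigned(mountPoint, lastEntry) > 0;
}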
-
isMultiDestDirectory
Checks if the path is a directory and is supposed to be present in all subclusters. - Parameters:
src - The source path.
- Returns:
- True if the path is a directory and is supposed to be present in all subclusters; false in all other scenarios.
- Throws:
IOException - If unable to get the file status.
-
getRouterFederationRenameCount
public int getRouterFederationRenameCount() -
getRpcServer
-
getRpcClient
-
getSubclusterResolver
-
getNamenodeResolver
-
getServerDefaultsLastUpdate
public long getServerDefaultsLastUpdate() -
getServerDefaultsValidityPeriod
public long getServerDefaultsValidityPeriod() -
isAllowPartialList
public boolean isAllowPartialList() -
getMountStatusTimeOut
public long getMountStatusTimeOut() -
getSuperUser
-
getSuperGroup
-
getStoragePolicy
-
setServerDefaultsLastUpdate
public void setServerDefaultsLastUpdate(long serverDefaultsLastUpdate) -
getRbfRename
-
getSecurityManager
-