Class RouterRpcServer

java.lang.Object
org.apache.hadoop.service.AbstractService
org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer
All Implemented Interfaces:
Closeable, AutoCloseable, org.apache.hadoop.hdfs.protocol.ClientProtocol, org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol, org.apache.hadoop.security.RefreshUserMappingsProtocol, org.apache.hadoop.service.Service, org.apache.hadoop.tools.GetUserMappingsProtocol

public class RouterRpcServer extends org.apache.hadoop.service.AbstractService implements org.apache.hadoop.hdfs.protocol.ClientProtocol, org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol, org.apache.hadoop.security.RefreshUserMappingsProtocol, org.apache.hadoop.tools.GetUserMappingsProtocol
This class is responsible for handling all of the RPC calls to the Router. It is created, started, and stopped by Router. It implements the ClientProtocol to mimic a NameNode and proxies the requests to the active NameNode.
  • Field Details

    • CONCURRENT_NS

      public static final String CONCURRENT_NS
      Name service keyword to identify fan-out calls.
  • Constructor Details

    • RouterRpcServer

      public RouterRpcServer(org.apache.hadoop.conf.Configuration conf, Router router, ActiveNamenodeResolver nnResolver, FileSubclusterResolver fileResolver) throws IOException
      Construct a router RPC server.
      Parameters:
      conf - HDFS Configuration.
      router - A router using this RPC server.
      nnResolver - The NN resolver instance to determine active NNs in HA.
      fileResolver - File resolver to resolve file paths to subclusters.
      Throws:
      IOException - If the RPC server could not be created.
  • Method Details

    • initAsyncThreadPools

      public void initAsyncThreadPools(org.apache.hadoop.conf.Configuration configuration)
Initialize the router's asynchronous handlers and asynchronous responders.
      Parameters:
      configuration - the configuration.
    • serviceInit

      protected void serviceInit(org.apache.hadoop.conf.Configuration configuration) throws Exception
      Overrides:
      serviceInit in class org.apache.hadoop.service.AbstractService
      Throws:
      Exception
    • serviceStart

      protected void serviceStart() throws Exception
      Overrides:
      serviceStart in class org.apache.hadoop.service.AbstractService
      Throws:
      Exception
    • serviceStop

      protected void serviceStop() throws Exception
      Overrides:
      serviceStop in class org.apache.hadoop.service.AbstractService
      Throws:
      Exception
    • getRouterStateIdContext

      @VisibleForTesting public RouterStateIdContext getRouterStateIdContext()
      Get the routerStateIdContext used by this server.
      Returns:
      routerStateIdContext
    • getRouterSecurityManager

      public RouterSecurityManager getRouterSecurityManager()
      Get the RPC security manager.
      Returns:
      RPC security manager.
    • getRPCClient

      public RouterRpcClient getRPCClient()
Get the RPC client to the Namenodes.
      Returns:
      RPC client to the Namenodes.
    • getSubclusterResolver

      public FileSubclusterResolver getSubclusterResolver()
      Get the subcluster resolver.
      Returns:
      Subcluster resolver.
    • getNamenodeResolver

      public ActiveNamenodeResolver getNamenodeResolver()
      Get the active namenode resolver.
      Returns:
      Active namenode resolver.
    • getRPCMonitor

      public RouterRpcMonitor getRPCMonitor()
      Get the RPC monitor and metrics.
      Returns:
      RPC monitor and metrics.
    • getServer

      @VisibleForTesting public org.apache.hadoop.ipc.RPC.Server getServer()
      Allow access to the client RPC server for testing.
      Returns:
      The RPC server.
    • getRpcAddress

      public InetSocketAddress getRpcAddress()
      Get the RPC address of the service.
      Returns:
      RPC service address.
    • checkOperation

      public void checkOperation(org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory op, boolean supported) throws org.apache.hadoop.ipc.StandbyException, UnsupportedOperationException
      Check if the Router is in safe mode. We should only see READ, WRITE, and UNCHECKED. It includes a default handler when we haven't implemented an operation. If not supported, it always throws an exception reporting the operation.
      Parameters:
      op - Category of the operation to check.
      supported - If the operation is supported or not. If not, it will throw an UnsupportedOperationException.
      Throws:
      org.apache.hadoop.ipc.StandbyException - If the Router is in safe mode and cannot serve client requests.
      UnsupportedOperationException - If the operation is not supported.
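The decision flow described above (reject unsupported operations first, then reject client requests while the Router is in safe mode) can be sketched with simplified local stand-ins. OperationCategory and StandbyException here are illustrative substitutes for the Hadoop types, not the actual implementation:

```java
// Minimal sketch of the checkOperation decision flow; the enum and the
// exception are local stand-ins for the Hadoop classes.
public class CheckOperationSketch {
    enum OperationCategory { READ, WRITE, UNCHECKED }

    // Stand-in for org.apache.hadoop.ipc.StandbyException (unchecked here
    // for brevity; the real one is a checked IOException).
    static class StandbyException extends RuntimeException {
        StandbyException(String msg) { super(msg); }
    }

    private final boolean safeMode;

    CheckOperationSketch(boolean safeMode) { this.safeMode = safeMode; }

    // Mirrors checkOperation(op, supported): an unsupported operation always
    // fails, regardless of safe mode.
    void checkOperation(OperationCategory op, boolean supported) {
        if (!supported) {
            throw new UnsupportedOperationException(
                "Operation \"" + op + "\" is not supported");
        }
        checkOperation(op);
    }

    // Mirrors checkOperation(op): only READ, WRITE, and UNCHECKED are
    // expected; in safe mode the router cannot serve client requests.
    void checkOperation(OperationCategory op) {
        if (safeMode) {
            throw new StandbyException("Router is in safe mode");
        }
    }
}
```

The ordering matters: the unsupported check fires even when the Router is in safe mode, so callers get the more specific error.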
    • checkOperation

      public void checkOperation(org.apache.hadoop.hdfs.server.namenode.NameNode.OperationCategory op) throws org.apache.hadoop.ipc.StandbyException
      Check if the Router is in safe mode. We should only see READ, WRITE, and UNCHECKED. This function should be called by all ClientProtocol functions.
      Parameters:
      op - Category of the operation to check.
      Throws:
      org.apache.hadoop.ipc.StandbyException - If the Router is in safe mode and cannot serve client requests.
    • invokeAtAvailableNsAsync

      public <T> T invokeAtAvailableNsAsync(RemoteMethod method, Class<T> clazz) throws IOException
Invokes the method at the default namespace; if the default namespace is unavailable, retries at the other available namespaces until one succeeds. Asynchronous version of the invokeAtAvailableNs method.
      Type Parameters:
      T - expected return type.
      Parameters:
      method - the remote method.
      clazz - the type of return value.
      Returns:
      the response received after invoking method.
      Throws:
IOException - If no namespace is available, or another IOException occurs.
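The fallback strategy described above can be sketched generically: try the default namespace first and, on unavailability, retry the same call at each remaining namespace. The namespace names and the callback type below are illustrative, not the Hadoop API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Sketch of the "default namespace first, then the others" retry loop;
// NamespaceUnavailableException is a local stand-in for the retriable
// failures the router actually sees.
public class AvailableNsSketch {
    static class NamespaceUnavailableException extends RuntimeException {
        NamespaceUnavailableException(String ns) { super(ns); }
    }

    static <T> T invokeAtAvailableNs(String defaultNs, List<String> allNs,
                                     Function<String, T> call) {
        // Build the try order: default namespace first, then the rest.
        List<String> order = new ArrayList<>();
        order.add(defaultNs);
        for (String ns : allNs) {
            if (!ns.equals(defaultNs)) {
                order.add(ns);
            }
        }
        RuntimeException last = null;
        for (String ns : order) {
            try {
                return call.apply(ns);  // first namespace that answers wins
            } catch (NamespaceUnavailableException e) {
                last = e;               // remember the failure, try the next
            }
        }
        throw new IllegalStateException("No namespace available", last);
    }
}
```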
    • getDelegationToken

      public org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier> getDelegationToken(org.apache.hadoop.io.Text renewer) throws IOException
      Specified by:
      getDelegationToken in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • renewDelegationToken

      public long renewDelegationToken(org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier> token) throws IOException
      Specified by:
      renewDelegationToken in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • cancelDelegationToken

      public void cancelDelegationToken(org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier> token) throws IOException
      Specified by:
      cancelDelegationToken in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getBlockLocations

      public org.apache.hadoop.hdfs.protocol.LocatedBlocks getBlockLocations(String src, long offset, long length) throws IOException
      Specified by:
      getBlockLocations in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getServerDefaults

      public org.apache.hadoop.fs.FsServerDefaults getServerDefaults() throws IOException
      Specified by:
      getServerDefaults in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • create

      public org.apache.hadoop.hdfs.protocol.HdfsFileStatus create(String src, org.apache.hadoop.fs.permission.FsPermission masked, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize, org.apache.hadoop.crypto.CryptoProtocolVersion[] supportedVersions, String ecPolicyName, String storagePolicy) throws IOException
      Specified by:
      create in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getCreateLocationAsync

      public RemoteLocation getCreateLocationAsync(String src, List<RemoteLocation> locations) throws IOException
Get the location to create a file. It checks whether the file already exists in one of the locations. Asynchronous version of the getCreateLocation method.
      Parameters:
      src - Path of the file to check.
      locations - Prefetched locations for the file.
      Returns:
      The remote location for this file.
      Throws:
      IOException - If the file has no creation location.
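The selection rule described above (reuse the location where the file already exists, otherwise fall back to the first prefetched location) can be sketched as follows. The existence check is a plain set lookup here, standing in for the remote file-status calls the router actually performs:

```java
import java.util.List;
import java.util.Set;

// Sketch of the create-location rule; RemoteLocation is replaced by a
// plain String and the remote existence probe by a Set lookup.
public class CreateLocationSketch {
    static String getCreateLocation(String src, List<String> locations,
                                    Set<String> locationsWithFile) {
        for (String loc : locations) {
            if (locationsWithFile.contains(loc)) {
                return loc;  // file already exists here; reuse this location
            }
        }
        return locations.get(0);  // default to the first prefetched location
    }
}
```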
    • append

      public org.apache.hadoop.hdfs.protocol.LastBlockWithStatus append(String src, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag) throws IOException
      Specified by:
      append in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • recoverLease

      public boolean recoverLease(String src, String clientName) throws IOException
      Specified by:
      recoverLease in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • setReplication

      public boolean setReplication(String src, short replication) throws IOException
      Specified by:
      setReplication in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • setStoragePolicy

      public void setStoragePolicy(String src, String policyName) throws IOException
      Specified by:
      setStoragePolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getStoragePolicies

      public org.apache.hadoop.hdfs.protocol.BlockStoragePolicy[] getStoragePolicies() throws IOException
      Specified by:
      getStoragePolicies in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • setPermission

      public void setPermission(String src, org.apache.hadoop.fs.permission.FsPermission permissions) throws IOException
      Specified by:
      setPermission in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • setOwner

      public void setOwner(String src, String username, String groupname) throws IOException
      Specified by:
      setOwner in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • addBlock

      public org.apache.hadoop.hdfs.protocol.LocatedBlock addBlock(String src, String clientName, org.apache.hadoop.hdfs.protocol.ExtendedBlock previous, org.apache.hadoop.hdfs.protocol.DatanodeInfo[] excludedNodes, long fileId, String[] favoredNodes, EnumSet<org.apache.hadoop.hdfs.AddBlockFlag> addBlockFlags) throws IOException
      Excluded and favored nodes are not verified and will be ignored by placement policy if they are not in the same nameservice as the file.
      Specified by:
      addBlock in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getAdditionalDatanode

      public org.apache.hadoop.hdfs.protocol.LocatedBlock getAdditionalDatanode(String src, long fileId, org.apache.hadoop.hdfs.protocol.ExtendedBlock blk, org.apache.hadoop.hdfs.protocol.DatanodeInfo[] existings, String[] existingStorageIDs, org.apache.hadoop.hdfs.protocol.DatanodeInfo[] excludes, int numAdditionalNodes, String clientName) throws IOException
Excluded nodes are not verified and will be ignored by the placement policy if they are not in the same nameservice as the file.
      Specified by:
      getAdditionalDatanode in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • abandonBlock

      public void abandonBlock(org.apache.hadoop.hdfs.protocol.ExtendedBlock b, long fileId, String src, String holder) throws IOException
      Specified by:
      abandonBlock in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • complete

      public boolean complete(String src, String clientName, org.apache.hadoop.hdfs.protocol.ExtendedBlock last, long fileId) throws IOException
      Specified by:
      complete in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • updateBlockForPipeline

      public org.apache.hadoop.hdfs.protocol.LocatedBlock updateBlockForPipeline(org.apache.hadoop.hdfs.protocol.ExtendedBlock block, String clientName) throws IOException
      Specified by:
      updateBlockForPipeline in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • updatePipeline

      public void updatePipeline(String clientName, org.apache.hadoop.hdfs.protocol.ExtendedBlock oldBlock, org.apache.hadoop.hdfs.protocol.ExtendedBlock newBlock, org.apache.hadoop.hdfs.protocol.DatanodeID[] newNodes, String[] newStorageIDs) throws IOException
Datanodes are not verified to be in the same nameservice as the old block. TODO: this may require validation.
      Specified by:
      updatePipeline in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getPreferredBlockSize

      public long getPreferredBlockSize(String src) throws IOException
      Specified by:
      getPreferredBlockSize in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • rename

      @Deprecated public boolean rename(String src, String dst) throws IOException
      Deprecated.
      Specified by:
      rename in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • rename2

      public void rename2(String src, String dst, org.apache.hadoop.fs.Options.Rename... options) throws IOException
      Specified by:
      rename2 in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • concat

      public void concat(String trg, String[] src) throws IOException
      Specified by:
      concat in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • truncate

      public boolean truncate(String src, long newLength, String clientName) throws IOException
      Specified by:
      truncate in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • delete

      public boolean delete(String src, boolean recursive) throws IOException
      Specified by:
      delete in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • mkdirs

      public boolean mkdirs(String src, org.apache.hadoop.fs.permission.FsPermission masked, boolean createParent) throws IOException
      Specified by:
      mkdirs in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • renewLease

      public void renewLease(String clientName, List<String> namespaces) throws IOException
      Specified by:
      renewLease in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getListing

      public org.apache.hadoop.hdfs.protocol.DirectoryListing getListing(String src, byte[] startAfter, boolean needLocation) throws IOException
      Specified by:
      getListing in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getBatchedListing

      public org.apache.hadoop.hdfs.protocol.BatchedDirectoryListing getBatchedListing(String[] srcs, byte[] startAfter, boolean needLocation) throws IOException
      Specified by:
      getBatchedListing in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getFileInfo

      public org.apache.hadoop.hdfs.protocol.HdfsFileStatus getFileInfo(String src) throws IOException
      Specified by:
      getFileInfo in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • isFileClosed

      public boolean isFileClosed(String src) throws IOException
      Specified by:
      isFileClosed in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getFileLinkInfo

      public org.apache.hadoop.hdfs.protocol.HdfsFileStatus getFileLinkInfo(String src) throws IOException
      Specified by:
      getFileLinkInfo in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getLocatedFileInfo

      public org.apache.hadoop.hdfs.protocol.HdfsLocatedFileStatus getLocatedFileInfo(String src, boolean needBlockToken) throws IOException
      Specified by:
      getLocatedFileInfo in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getStats

      public long[] getStats() throws IOException
      Specified by:
      getStats in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getDatanodeReport

      public org.apache.hadoop.hdfs.protocol.DatanodeInfo[] getDatanodeReport(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type) throws IOException
      Specified by:
      getDatanodeReport in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getDatanodeReport

      public org.apache.hadoop.hdfs.protocol.DatanodeInfo[] getDatanodeReport(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type, boolean requireResponse, long timeOutMs) throws IOException
      Get the datanode report with a timeout.
      Parameters:
type - Type of the datanodes to get.
      requireResponse - If we require all the namespaces to report.
      timeOutMs - Time out for the reply in milliseconds.
      Returns:
      List of datanodes.
      Throws:
      IOException - If it cannot get the report.
    • getDatanodeReportAsync

      public org.apache.hadoop.hdfs.protocol.DatanodeInfo[] getDatanodeReportAsync(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type, boolean requireResponse, long timeOutMs) throws IOException
      Get the datanode report with a timeout. Asynchronous version of the getDatanodeReport method.
      Parameters:
type - Type of the datanodes to get.
      requireResponse - If we require all the namespaces to report.
      timeOutMs - Time out for the reply in milliseconds.
      Returns:
      List of datanodes.
      Throws:
      IOException - If it cannot get the report.
    • getDatanodeStorageReport

      public org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport[] getDatanodeStorageReport(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type) throws IOException
      Specified by:
      getDatanodeStorageReport in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getDatanodeStorageReportMap

      public Map<String,org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport[]> getDatanodeStorageReportMap(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type) throws IOException
      Get the list of datanodes per subcluster.
      Parameters:
      type - Type of the datanodes to get.
      Returns:
      nsId to datanode list.
      Throws:
      IOException - If the method cannot be invoked remotely.
    • getDatanodeStorageReportMapAsync

      public Map<String,org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport[]> getDatanodeStorageReportMapAsync(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type) throws IOException
      Get the list of datanodes per subcluster. Asynchronous version of the getDatanodeStorageReportMap method.
      Throws:
      IOException
    • getDatanodeStorageReportMap

      public Map<String,org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport[]> getDatanodeStorageReportMap(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type, boolean requireResponse, long timeOutMs) throws IOException
      Get the list of datanodes per subcluster.
      Parameters:
      type - Type of the datanodes to get.
requireResponse - If true, an exception is thrown unless all calls complete. If false, exceptions are ignored and only the results that were successfully received are returned.
      timeOutMs - Time out for the reply in milliseconds.
      Returns:
      nsId to datanode list.
      Throws:
      IOException - If the method cannot be invoked remotely.
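The requireResponse semantics described above can be sketched with simplified local types: gather one result per namespace, fail the whole request on any missing response when requireResponse is true, and return the partial map otherwise. The report is a plain String here, standing in for DatanodeStorageReport[]:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;

// Sketch of the per-subcluster report aggregation; an empty Optional
// models a namespace that failed to respond in time.
public class StorageReportMapSketch {
    static Map<String, String> reportMap(Iterable<String> namespaces,
                                         boolean requireResponse,
                                         Function<String, Optional<String>> call) {
        Map<String, String> results = new LinkedHashMap<>();
        for (String ns : namespaces) {
            Optional<String> report = call.apply(ns);
            if (report.isPresent()) {
                results.put(ns, report.get());  // nsId -> datanode report
            } else if (requireResponse) {
                // One missing response fails the whole request.
                throw new IllegalStateException("No response from " + ns);
            }
            // requireResponse == false: silently drop the failed namespace.
        }
        return results;
    }
}
```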
    • getDatanodeStorageReportMapAsync

      public Map<String,org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport[]> getDatanodeStorageReportMapAsync(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type, boolean requireResponse, long timeOutMs) throws IOException
      Get the list of datanodes per subcluster. Asynchronous version of getDatanodeStorageReportMap method.
      Parameters:
      type - Type of the datanodes to get.
requireResponse - If true, an exception is thrown unless all calls complete. If false, exceptions are ignored and only the results that were successfully received are returned.
      timeOutMs - Time out for the reply in milliseconds.
      Returns:
      nsId to datanode list.
      Throws:
      IOException - If the method cannot be invoked remotely.
    • setSafeMode

      public boolean setSafeMode(org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction action, boolean isChecked) throws IOException
      Specified by:
      setSafeMode in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • restoreFailedStorage

      public boolean restoreFailedStorage(String arg) throws IOException
      Specified by:
      restoreFailedStorage in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • saveNamespace

      public boolean saveNamespace(long timeWindow, long txGap) throws IOException
      Specified by:
      saveNamespace in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • rollEdits

      public long rollEdits() throws IOException
      Specified by:
      rollEdits in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • refreshNodes

      public void refreshNodes() throws IOException
      Specified by:
      refreshNodes in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • finalizeUpgrade

      public void finalizeUpgrade() throws IOException
      Specified by:
      finalizeUpgrade in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • upgradeStatus

      public boolean upgradeStatus() throws IOException
      Specified by:
      upgradeStatus in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • rollingUpgrade

      public org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo rollingUpgrade(org.apache.hadoop.hdfs.protocol.HdfsConstants.RollingUpgradeAction action) throws IOException
      Specified by:
      rollingUpgrade in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • metaSave

      public void metaSave(String filename) throws IOException
      Specified by:
      metaSave in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • listCorruptFileBlocks

      public org.apache.hadoop.hdfs.protocol.CorruptFileBlocks listCorruptFileBlocks(String path, String cookie) throws IOException
      Specified by:
      listCorruptFileBlocks in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • setBalancerBandwidth

      public void setBalancerBandwidth(long bandwidth) throws IOException
      Specified by:
      setBalancerBandwidth in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getContentSummary

      public org.apache.hadoop.fs.ContentSummary getContentSummary(String path) throws IOException
      Specified by:
      getContentSummary in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • fsync

      public void fsync(String src, long fileId, String clientName, long lastBlockLength) throws IOException
      Specified by:
      fsync in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • setTimes

      public void setTimes(String src, long mtime, long atime) throws IOException
      Specified by:
      setTimes in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • createSymlink

      public void createSymlink(String target, String link, org.apache.hadoop.fs.permission.FsPermission dirPerms, boolean createParent) throws IOException
      Specified by:
      createSymlink in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getLinkTarget

      public String getLinkTarget(String path) throws IOException
      Specified by:
      getLinkTarget in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • allowSnapshot

      public void allowSnapshot(String snapshotRoot) throws IOException
      Specified by:
      allowSnapshot in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • disallowSnapshot

      public void disallowSnapshot(String snapshot) throws IOException
      Specified by:
      disallowSnapshot in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • renameSnapshot

      public void renameSnapshot(String snapshotRoot, String snapshotOldName, String snapshotNewName) throws IOException
      Specified by:
      renameSnapshot in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getSnapshottableDirListing

      public org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus[] getSnapshottableDirListing() throws IOException
      Specified by:
      getSnapshottableDirListing in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getSnapshotListing

      public org.apache.hadoop.hdfs.protocol.SnapshotStatus[] getSnapshotListing(String snapshotRoot) throws IOException
      Specified by:
      getSnapshotListing in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getSnapshotDiffReport

      public org.apache.hadoop.hdfs.protocol.SnapshotDiffReport getSnapshotDiffReport(String snapshotRoot, String earlierSnapshotName, String laterSnapshotName) throws IOException
      Specified by:
      getSnapshotDiffReport in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getSnapshotDiffReportListing

      public org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing getSnapshotDiffReportListing(String snapshotRoot, String earlierSnapshotName, String laterSnapshotName, byte[] startPath, int index) throws IOException
      Specified by:
      getSnapshotDiffReportListing in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • addCacheDirective

      public long addCacheDirective(org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo path, EnumSet<org.apache.hadoop.fs.CacheFlag> flags) throws IOException
      Specified by:
      addCacheDirective in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • modifyCacheDirective

      public void modifyCacheDirective(org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo directive, EnumSet<org.apache.hadoop.fs.CacheFlag> flags) throws IOException
      Specified by:
      modifyCacheDirective in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • removeCacheDirective

      public void removeCacheDirective(long id) throws IOException
      Specified by:
      removeCacheDirective in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • listCacheDirectives

      public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.CacheDirectiveEntry> listCacheDirectives(long prevId, org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo filter) throws IOException
      Specified by:
      listCacheDirectives in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • addCachePool

      public void addCachePool(org.apache.hadoop.hdfs.protocol.CachePoolInfo info) throws IOException
      Specified by:
      addCachePool in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • modifyCachePool

      public void modifyCachePool(org.apache.hadoop.hdfs.protocol.CachePoolInfo info) throws IOException
      Specified by:
      modifyCachePool in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • removeCachePool

      public void removeCachePool(String cachePoolName) throws IOException
      Specified by:
      removeCachePool in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • listCachePools

      public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.CachePoolEntry> listCachePools(String prevKey) throws IOException
      Specified by:
      listCachePools in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • modifyAclEntries

      public void modifyAclEntries(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException
      Specified by:
      modifyAclEntries in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • removeAclEntries

      public void removeAclEntries(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException
      Specified by:
      removeAclEntries in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • removeDefaultAcl

      public void removeDefaultAcl(String src) throws IOException
      Specified by:
      removeDefaultAcl in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • removeAcl

      public void removeAcl(String src) throws IOException
      Specified by:
      removeAcl in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • setAcl

      public void setAcl(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException
      Specified by:
      setAcl in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getAclStatus

      public org.apache.hadoop.fs.permission.AclStatus getAclStatus(String src) throws IOException
      Specified by:
      getAclStatus in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • createEncryptionZone

      public void createEncryptionZone(String src, String keyName) throws IOException
      Specified by:
      createEncryptionZone in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getEZForPath

      public org.apache.hadoop.hdfs.protocol.EncryptionZone getEZForPath(String src) throws IOException
      Specified by:
      getEZForPath in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • listEncryptionZones

      public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.EncryptionZone> listEncryptionZones(long prevId) throws IOException
      Specified by:
      listEncryptionZones in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • reencryptEncryptionZone

      public void reencryptEncryptionZone(String zone, org.apache.hadoop.hdfs.protocol.HdfsConstants.ReencryptAction action) throws IOException
      Specified by:
      reencryptEncryptionZone in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • listReencryptionStatus

      public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.ZoneReencryptionStatus> listReencryptionStatus(long prevId) throws IOException
      Specified by:
      listReencryptionStatus in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • setXAttr

      public void setXAttr(String src, org.apache.hadoop.fs.XAttr xAttr, EnumSet<org.apache.hadoop.fs.XAttrSetFlag> flag) throws IOException
      Specified by:
      setXAttr in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getXAttrs

      public List<org.apache.hadoop.fs.XAttr> getXAttrs(String src, List<org.apache.hadoop.fs.XAttr> xAttrs) throws IOException
      Specified by:
      getXAttrs in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • listXAttrs

      public List<org.apache.hadoop.fs.XAttr> listXAttrs(String src) throws IOException
      Specified by:
      listXAttrs in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • removeXAttr

      public void removeXAttr(String src, org.apache.hadoop.fs.XAttr xAttr) throws IOException
      Specified by:
      removeXAttr in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • checkAccess

      public void checkAccess(String path, org.apache.hadoop.fs.permission.FsAction mode) throws IOException
      Specified by:
      checkAccess in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getCurrentEditLogTxid

      public long getCurrentEditLogTxid() throws IOException
      Specified by:
      getCurrentEditLogTxid in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getEditsFromTxid

      public org.apache.hadoop.hdfs.inotify.EventBatchList getEditsFromTxid(long txid) throws IOException
      Specified by:
      getEditsFromTxid in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getDataEncryptionKey

      public org.apache.hadoop.hdfs.security.token.block.DataEncryptionKey getDataEncryptionKey() throws IOException
      Specified by:
      getDataEncryptionKey in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • createSnapshot

      public String createSnapshot(String snapshotRoot, String snapshotName) throws IOException
      Specified by:
      createSnapshot in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • deleteSnapshot

      public void deleteSnapshot(String snapshotRoot, String snapshotName) throws IOException
      Specified by:
      deleteSnapshot in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • setQuota

      public void setQuota(String path, long namespaceQuota, long storagespaceQuota, org.apache.hadoop.fs.StorageType type) throws IOException
      Specified by:
      setQuota in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getQuotaUsage

      public org.apache.hadoop.fs.QuotaUsage getQuotaUsage(String path) throws IOException
      Specified by:
      getQuotaUsage in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • reportBadBlocks

      public void reportBadBlocks(org.apache.hadoop.hdfs.protocol.LocatedBlock[] blocks) throws IOException
      Specified by:
      reportBadBlocks in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • unsetStoragePolicy

      public void unsetStoragePolicy(String src) throws IOException
      Specified by:
      unsetStoragePolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getStoragePolicy

      public org.apache.hadoop.hdfs.protocol.BlockStoragePolicy getStoragePolicy(String path) throws IOException
      Specified by:
      getStoragePolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getErasureCodingPolicies

      public org.apache.hadoop.hdfs.protocol.ErasureCodingPolicyInfo[] getErasureCodingPolicies() throws IOException
      Specified by:
      getErasureCodingPolicies in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getErasureCodingCodecs

      public Map<String,String> getErasureCodingCodecs() throws IOException
      Specified by:
      getErasureCodingCodecs in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • addErasureCodingPolicies

      public org.apache.hadoop.hdfs.protocol.AddErasureCodingPolicyResponse[] addErasureCodingPolicies(org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy[] policies) throws IOException
      Specified by:
      addErasureCodingPolicies in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • removeErasureCodingPolicy

      public void removeErasureCodingPolicy(String ecPolicyName) throws IOException
      Specified by:
      removeErasureCodingPolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • disableErasureCodingPolicy

      public void disableErasureCodingPolicy(String ecPolicyName) throws IOException
      Specified by:
      disableErasureCodingPolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • enableErasureCodingPolicy

      public void enableErasureCodingPolicy(String ecPolicyName) throws IOException
      Specified by:
      enableErasureCodingPolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getErasureCodingPolicy

      public org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy getErasureCodingPolicy(String src) throws IOException
      Specified by:
      getErasureCodingPolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • setErasureCodingPolicy

      public void setErasureCodingPolicy(String src, String ecPolicyName) throws IOException
      Specified by:
      setErasureCodingPolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • unsetErasureCodingPolicy

      public void unsetErasureCodingPolicy(String src) throws IOException
      Specified by:
      unsetErasureCodingPolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getECTopologyResultForPolicies

      public org.apache.hadoop.hdfs.protocol.ECTopologyVerifierResult getECTopologyResultForPolicies(String... policyNames) throws IOException
      Specified by:
      getECTopologyResultForPolicies in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getECBlockGroupStats

      public org.apache.hadoop.hdfs.protocol.ECBlockGroupStats getECBlockGroupStats() throws IOException
      Specified by:
      getECBlockGroupStats in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getReplicatedBlockStats

      public org.apache.hadoop.hdfs.protocol.ReplicatedBlockStats getReplicatedBlockStats() throws IOException
      Specified by:
      getReplicatedBlockStats in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • listOpenFiles

      @Deprecated public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.OpenFileEntry> listOpenFiles(long prevId) throws IOException
      Deprecated.
      Specified by:
      listOpenFiles in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getHAServiceState

      public org.apache.hadoop.ha.HAServiceProtocol.HAServiceState getHAServiceState() throws IOException
      Specified by:
      getHAServiceState in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • listOpenFiles

      public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.OpenFileEntry> listOpenFiles(long prevId, EnumSet<org.apache.hadoop.hdfs.protocol.OpenFilesIterator.OpenFilesType> openFilesTypes, String path) throws IOException
      Specified by:
      listOpenFiles in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • msync

      public void msync() throws IOException
      Specified by:
      msync in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • satisfyStoragePolicy

      public void satisfyStoragePolicy(String path) throws IOException
      Specified by:
      satisfyStoragePolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getSlowDatanodeReport

      public org.apache.hadoop.hdfs.protocol.DatanodeInfo[] getSlowDatanodeReport() throws IOException
      Specified by:
      getSlowDatanodeReport in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getEnclosingRoot

      public org.apache.hadoop.fs.Path getEnclosingRoot(String src) throws IOException
      Specified by:
      getEnclosingRoot in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getBlocks

      public org.apache.hadoop.hdfs.server.protocol.BlocksWithLocations getBlocks(org.apache.hadoop.hdfs.protocol.DatanodeInfo datanode, long size, long minBlockSize, long hotBlockTimeInterval, org.apache.hadoop.fs.StorageType storageType) throws IOException
      Specified by:
      getBlocks in interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol
      Throws:
      IOException
    • getBlockKeys

      public org.apache.hadoop.hdfs.security.token.block.ExportedBlockKeys getBlockKeys() throws IOException
      Specified by:
      getBlockKeys in interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol
      Throws:
      IOException
    • getTransactionID

      public long getTransactionID() throws IOException
      Specified by:
      getTransactionID in interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol
      Throws:
      IOException
    • getMostRecentCheckpointTxId

      public long getMostRecentCheckpointTxId() throws IOException
      Specified by:
      getMostRecentCheckpointTxId in interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol
      Throws:
      IOException
    • getMostRecentNameNodeFileTxId

      public long getMostRecentNameNodeFileTxId(org.apache.hadoop.hdfs.server.namenode.NNStorage.NameNodeFile nnf) throws IOException
      Specified by:
      getMostRecentNameNodeFileTxId in interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol
      Throws:
      IOException
    • rollEditLog

      public org.apache.hadoop.hdfs.server.namenode.CheckpointSignature rollEditLog() throws IOException
      Specified by:
      rollEditLog in interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol
      Throws:
      IOException
    • versionRequest

      public org.apache.hadoop.hdfs.server.protocol.NamespaceInfo versionRequest() throws IOException
      Specified by:
      versionRequest in interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol
      Throws:
      IOException
    • errorReport

      public void errorReport(org.apache.hadoop.hdfs.server.protocol.NamenodeRegistration registration, int errorCode, String msg) throws IOException
      Specified by:
      errorReport in interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol
      Throws:
      IOException
    • registerSubordinateNamenode

      public org.apache.hadoop.hdfs.server.protocol.NamenodeRegistration registerSubordinateNamenode(org.apache.hadoop.hdfs.server.protocol.NamenodeRegistration registration) throws IOException
      Specified by:
      registerSubordinateNamenode in interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol
      Throws:
      IOException
    • startCheckpoint

      public org.apache.hadoop.hdfs.server.protocol.NamenodeCommand startCheckpoint(org.apache.hadoop.hdfs.server.protocol.NamenodeRegistration registration) throws IOException
      Specified by:
      startCheckpoint in interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol
      Throws:
      IOException
    • endCheckpoint

      public void endCheckpoint(org.apache.hadoop.hdfs.server.protocol.NamenodeRegistration registration, org.apache.hadoop.hdfs.server.namenode.CheckpointSignature sig) throws IOException
      Specified by:
      endCheckpoint in interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol
      Throws:
      IOException
    • getEditLogManifest

      public org.apache.hadoop.hdfs.server.protocol.RemoteEditLogManifest getEditLogManifest(long sinceTxId) throws IOException
      Specified by:
      getEditLogManifest in interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol
      Throws:
      IOException
    • isUpgradeFinalized

      public boolean isUpgradeFinalized() throws IOException
      Specified by:
      isUpgradeFinalized in interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol
      Throws:
      IOException
    • isRollingUpgrade

      public boolean isRollingUpgrade() throws IOException
      Specified by:
      isRollingUpgrade in interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol
      Throws:
      IOException
    • getNextSPSPath

      public Long getNextSPSPath() throws IOException
      Specified by:
      getNextSPSPath in interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol
      Throws:
      IOException
    • getLocationsForPath

      public List<RemoteLocation> getLocationsForPath(String path, boolean failIfLocked) throws IOException
      Get the possible locations of a path in the federated cluster. Quota verification is performed as part of the lookup.
      Parameters:
      path - Path to check.
      failIfLocked - Fail the request if locked (top mount point).
      Returns:
      Prioritized list of locations in the federated cluster.
      Throws:
      IOException - If the location for this path cannot be determined.
    • getLocationsForPath

      public List<RemoteLocation> getLocationsForPath(String path, boolean failIfLocked, boolean needQuotaVerify) throws IOException
      Get the possible locations of a path in the federated cluster.
      Parameters:
      path - Path to check.
      failIfLocked - Fail the request if there is any mount point under the path.
      needQuotaVerify - Whether to perform quota verification.
      Returns:
      Prioritized list of locations in the federated cluster.
      Throws:
      IOException - If the location for this path cannot be determined.
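      The longest-prefix resolution behind these lookups can be illustrated with a minimal, self-contained sketch. This is not the Router's actual implementation (the real resolver uses FileSubclusterResolver, mount-table state, and quota checks); the class name, the string-keyed mount table, and the "nameservice:/path" destination format here are all illustrative assumptions.

      ```java
      import java.util.*;

      // Hypothetical sketch of resolving a path to prioritized remote
      // locations by longest-prefix match over a mount table. Stand-in
      // types; the real Router resolves via FileSubclusterResolver.
      public class MountResolverSketch {
          // mount point -> ordered list of "nameservice:/path" destinations
          private final Map<String, List<String>> mountTable = new HashMap<>();

          public void addMount(String mountPoint, List<String> destinations) {
              mountTable.put(mountPoint, destinations);
          }

          // Walk from the full path up to the root, returning the destinations
          // of the deepest matching mount point with the path remainder appended.
          public List<String> getLocationsForPath(String path) {
              for (String mount = path; !mount.isEmpty(); mount = parent(mount)) {
                  List<String> dests = mountTable.get(mount);
                  if (dests != null) {
                      String remainder = path.substring(mount.length());
                      List<String> result = new ArrayList<>();
                      for (String d : dests) {
                          result.add(d + remainder);
                      }
                      return result;
                  }
              }
              return Collections.emptyList();
          }

          private static String parent(String path) {
              int idx = path.lastIndexOf('/');
              return idx <= 0 ? (path.equals("/") ? "" : "/") : path.substring(0, idx);
          }
      }
      ```

      With mounts /data -> [ns0:/data] and /data/logs -> [ns1:/logs, ns0:/data/logs], resolving /data/logs/app1 yields the deeper mount's destinations first: [ns1:/logs/app1, ns0:/data/logs/app1].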
    • getRemoteUser

      public static org.apache.hadoop.security.UserGroupInformation getRemoteUser() throws IOException
      Get the user that is invoking this operation.
      Returns:
      Remote user group information.
      Throws:
      IOException - If we cannot get the user information.
    • merge

      public static <T> T[] merge(Map<FederationNamespaceInfo,T[]> map, Class<T> clazz)
      Merge the outputs from multiple namespaces.
      Type Parameters:
      T - The type of the objects to merge.
      Parameters:
      map - Namespace to Output array.
      clazz - Class of the values.
      Returns:
      Array with the outputs.
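      The semantics of merging per-namespace output arrays can be sketched generically. This is an illustrative stand-in, not the Router's code: the real method is keyed by FederationNamespaceInfo, while this sketch uses String keys for self-containment.

      ```java
      import java.lang.reflect.Array;
      import java.util.*;

      // Generic sketch of combining the output arrays from several
      // namespaces into a single typed array, in the spirit of
      // RouterRpcServer.merge (String keys stand in for namespace info).
      public class MergeSketch {
          public static <T> T[] merge(Map<String, T[]> map, Class<T> clazz) {
              List<T> all = new ArrayList<>();
              for (T[] values : map.values()) {
                  if (values != null) {
                      all.addAll(Arrays.asList(values));
                  }
              }
              // Reflection builds a T[] of the right runtime type.
              @SuppressWarnings("unchecked")
              T[] combined = (T[]) Array.newInstance(clazz, all.size());
              return all.toArray(combined);
          }
      }
      ```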
    • getQuotaModule

      public Quota getQuotaModule()
      Get the quota module implementation.
      Returns:
      The quota module implementation.
    • getClientProtocolModule

      @VisibleForTesting public RouterClientProtocol getClientProtocolModule()
      Get ClientProtocol module implementation.
      Returns:
      ClientProtocol implementation
    • getRPCMetrics

      public FederationRPCMetrics getRPCMetrics()
      Get RPC metrics info.
      Returns:
      The instance of FederationRPCMetrics.
    • isPathAll

      public boolean isPathAll(String path)
      Check if a path should be in all subclusters.
      Parameters:
      path - Path to check.
      Returns:
      True if the path should be in all subclusters.
    • isInvokeConcurrent

      public boolean isInvokeConcurrent(String path) throws IOException
      Check if the call needs to be invoked on all the locations. The call should be invoked on all locations when the mount entry's order is HASH_ALL, RANDOM, or SPACE, or when the source is itself a mount entry.
      Parameters:
      path - The path on which the operation needs to be invoked.
      Returns:
      true if the call should be invoked on all locations.
      Throws:
      IOException - If an I/O error occurs.
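      The fan-out decision described above can be sketched as a small predicate. The enum and the boolean lookup are stand-ins for illustration; the real method resolves the mount entry and its ordering policy from the mount table.

      ```java
      import java.util.EnumSet;

      // Illustrative sketch of the fan-out rule: invoke on all locations
      // when the mount entry's ordering policy is HASH_ALL, RANDOM, or
      // SPACE, or when the path is itself a mount point. Stand-in types.
      public class InvokeConcurrentSketch {
          enum DestinationOrder { HASH, LOCAL, RANDOM, HASH_ALL, SPACE }

          private static final EnumSet<DestinationOrder> FAN_OUT_ORDERS =
              EnumSet.of(DestinationOrder.HASH_ALL,
                         DestinationOrder.RANDOM,
                         DestinationOrder.SPACE);

          static boolean isInvokeConcurrent(DestinationOrder order,
                                            boolean isMountPoint) {
              return isMountPoint || FAN_OUT_ORDERS.contains(order);
          }
      }
      ```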
    • refreshUserToGroupsMappings

      public void refreshUserToGroupsMappings() throws IOException
      Specified by:
      refreshUserToGroupsMappings in interface org.apache.hadoop.security.RefreshUserMappingsProtocol
      Throws:
      IOException
    • refreshSuperUserGroupsConfiguration

      public void refreshSuperUserGroupsConfiguration() throws IOException
      Specified by:
      refreshSuperUserGroupsConfiguration in interface org.apache.hadoop.security.RefreshUserMappingsProtocol
      Throws:
      IOException
    • getGroupsForUser

      public String[] getGroupsForUser(String user) throws IOException
      Specified by:
      getGroupsForUser in interface org.apache.hadoop.tools.GetUserMappingsProtocol
      Throws:
      IOException
    • getRouterFederationRenameCount

      public int getRouterFederationRenameCount()
    • getSchedulerJobCount

      public int getSchedulerJobCount()
    • refreshFairnessPolicyController

      public String refreshFairnessPolicyController()
    • getSlowDatanodeReport

      public org.apache.hadoop.hdfs.protocol.DatanodeInfo[] getSlowDatanodeReport(boolean requireResponse, long timeOutMs) throws IOException
      Get the slow running datanodes report with a timeout.
      Parameters:
      requireResponse - If we require all the namespaces to report.
      timeOutMs - Time out for the reply in milliseconds.
      Returns:
      List of datanodes.
      Throws:
      IOException - If it cannot get the report.
    • getSlowDatanodeReportAsync

      public org.apache.hadoop.hdfs.protocol.DatanodeInfo[] getSlowDatanodeReportAsync(boolean requireResponse, long timeOutMs) throws IOException
      Get the slow running datanodes report with a timeout. Asynchronous version of the getSlowDatanodeReport method.
      Parameters:
      requireResponse - If we require all the namespaces to report.
      timeOutMs - Time out for the reply in milliseconds.
      Returns:
      List of datanodes.
      Throws:
      IOException - If it cannot get the report.
    • isAsync

      public boolean isAsync()
    • getAsyncRouterHandlerExecutors

      public Map<String,ExecutorService> getAsyncRouterHandlerExecutors()
    • getRouterAsyncHandlerDefaultExecutor

      public ExecutorService getRouterAsyncHandlerDefaultExecutor()