Class RouterClientProtocol

java.lang.Object
org.apache.hadoop.hdfs.server.federation.router.RouterClientProtocol
All Implemented Interfaces:
org.apache.hadoop.hdfs.protocol.ClientProtocol
Direct Known Subclasses:
RouterAsyncClientProtocol

public class RouterClientProtocol extends Object implements org.apache.hadoop.hdfs.protocol.ClientProtocol
Module that implements all the RPC calls in ClientProtocol in the RouterRpcServer.
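A minimal client-side sketch of what this module serves: a standard HDFS client pointed at a Router's RPC address (the host below is hypothetical; 8888 is the usual RBF default port) ends up exercising these ClientProtocol implementations.

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class RouterClientSketch {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      // fs.defaultFS points at the Router's RPC address, not a NameNode.
      conf.set("fs.defaultFS", "hdfs://router.example.com:8888");
      try (FileSystem fs = FileSystem.get(conf)) {
        // Each call below is served by the corresponding
        // RouterClientProtocol method on the RouterRpcServer.
        fs.mkdirs(new Path("/tmp/demo"));
        System.out.println(fs.getFileStatus(new Path("/tmp/demo")));
      }
    }
  }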
  • Constructor Details

    • RouterClientProtocol

      public RouterClientProtocol(org.apache.hadoop.conf.Configuration conf, RouterRpcServer rpcServer)
  • Method Details

    • getDelegationToken

      public org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier> getDelegationToken(org.apache.hadoop.io.Text renewer) throws IOException
      Specified by:
      getDelegationToken in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getDelegationTokens

      public Map<FederationNamespaceInfo,org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier>> getDelegationTokens(org.apache.hadoop.io.Text renewer) throws IOException
      Get the delegation token from each name service; see the sketch below.
      Parameters:
      renewer - The token renewer.
      Returns:
      Name service to Token.
      Throws:
      IOException - If it cannot get the delegation token.
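      A minimal sketch of using this Router-specific call, assuming an already-initialized RouterClientProtocol instance; the renewer name is hypothetical.

        import java.io.IOException;
        import java.util.Map;
        import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
        import org.apache.hadoop.hdfs.server.federation.resolver.FederationNamespaceInfo;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.security.token.Token;

        class TokenPerNamespaceSketch {
          static void printTokens(RouterClientProtocol protocol) throws IOException {
            Map<FederationNamespaceInfo, Token<DelegationTokenIdentifier>> tokens =
                protocol.getDelegationTokens(new Text("renewerUser"));
            // One token per name service, keyed by its namespace info.
            tokens.forEach((ns, token) ->
                System.out.println(ns.getNameserviceId() + " -> " + token));
          }
        }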
    • renewDelegationToken

      public long renewDelegationToken(org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier> token) throws IOException
      Specified by:
      renewDelegationToken in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • cancelDelegationToken

      public void cancelDelegationToken(org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier> token) throws IOException
      Specified by:
      cancelDelegationToken in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getBlockLocations

      public org.apache.hadoop.hdfs.protocol.LocatedBlocks getBlockLocations(String src, long offset, long length) throws IOException
      Specified by:
      getBlockLocations in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getServerDefaults

      public org.apache.hadoop.fs.FsServerDefaults getServerDefaults() throws IOException
      Specified by:
      getServerDefaults in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • create

      public org.apache.hadoop.hdfs.protocol.HdfsFileStatus create(String src, org.apache.hadoop.fs.permission.FsPermission masked, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize, org.apache.hadoop.crypto.CryptoProtocolVersion[] supportedVersions, String ecPolicyName, String storagePolicy) throws IOException
      Specified by:
      create in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • isUnavailableSubclusterException

      protected static boolean isUnavailableSubclusterException(IOException ioe)
      Check whether an exception is caused by an unavailable subcluster. The exception's causes are checked as well; see the sketch below.
      Parameters:
      ioe - IOException to check.
      Returns:
      True if caused by an unavailable subcluster; false if the exception should not be retried (e.g., NSQuotaExceededException).
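      An illustrative sketch of the cause-chain check, not the actual implementation; the exception types named here are assumptions about what counts as "unavailable".

        import java.io.IOException;
        import java.net.ConnectException;
        import org.apache.hadoop.ipc.StandbyException;

        class UnavailableSubclusterSketch {
          static boolean looksUnavailable(IOException ioe) {
            // Walk the cause chain so a wrapped connection failure is
            // still recognized as retriable.
            for (Throwable t = ioe; t != null; t = t.getCause()) {
              if (t instanceof ConnectException || t instanceof StandbyException) {
                return true; // target subcluster unreachable or not active
              }
            }
            return false; // e.g. NSQuotaExceededException: do not retry
          }
        }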
    • checkFaultTolerantRetry

      protected List<RemoteLocation> checkFaultTolerantRetry(RemoteMethod method, String src, IOException ioe, RemoteLocation excludeLoc, List<RemoteLocation> locations) throws IOException
      Check if a remote method can be retried in other subclusters when it failed in the original destination. This method returns the list of locations to retry in; it is used by fault-tolerant mount points. See the sketch below.
      Parameters:
      method - Method that failed and might be retried.
      src - Path where the method was invoked.
      ioe - Exception that was triggered.
      excludeLoc - Location that failed and should be excluded.
      locations - All the locations to retry.
      Returns:
      The locations where we should retry (excluding the failed ones).
      Throws:
      IOException - If this path is not fault tolerant or the exception should not be retried (e.g., NSQuotaExceededException).
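      A hedged sketch of the retry pattern this method enables for fault-tolerant mount points; resolve, invoke, and invokeOnAny are hypothetical helpers standing in for the RouterRpcClient machinery.

        List<RemoteLocation> locations = resolve(src);   // assumed helper
        RemoteLocation first = locations.get(0);
        try {
          invoke(method, first);                         // assumed helper
        } catch (IOException ioe) {
          // For non-fault-tolerant mount points, or non-retriable errors
          // such as NSQuotaExceededException, this rethrows; otherwise it
          // returns the remaining locations with the failed one excluded.
          List<RemoteLocation> retry =
              checkFaultTolerantRetry(method, src, ioe, first, locations);
          invokeOnAny(method, retry);                    // assumed helper
        }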
    • append

      public org.apache.hadoop.hdfs.protocol.LastBlockWithStatus append(String src, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag) throws IOException
      Specified by:
      append in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • recoverLease

      public boolean recoverLease(String src, String clientName) throws IOException
      Specified by:
      recoverLease in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • setReplication

      public boolean setReplication(String src, short replication) throws IOException
      Specified by:
      setReplication in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • setStoragePolicy

      public void setStoragePolicy(String src, String policyName) throws IOException
      Specified by:
      setStoragePolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getStoragePolicies

      public org.apache.hadoop.hdfs.protocol.BlockStoragePolicy[] getStoragePolicies() throws IOException
      Specified by:
      getStoragePolicies in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • setPermission

      public void setPermission(String src, org.apache.hadoop.fs.permission.FsPermission permissions) throws IOException
      Specified by:
      setPermission in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • setOwner

      public void setOwner(String src, String username, String groupname) throws IOException
      Specified by:
      setOwner in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • addBlock

      public org.apache.hadoop.hdfs.protocol.LocatedBlock addBlock(String src, String clientName, org.apache.hadoop.hdfs.protocol.ExtendedBlock previous, org.apache.hadoop.hdfs.protocol.DatanodeInfo[] excludedNodes, long fileId, String[] favoredNodes, EnumSet<org.apache.hadoop.hdfs.AddBlockFlag> addBlockFlags) throws IOException
      Excluded and favored nodes are not verified and will be ignored by the placement policy if they are not in the same nameservice as the file.
      Specified by:
      addBlock in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getAdditionalDatanode

      public org.apache.hadoop.hdfs.protocol.LocatedBlock getAdditionalDatanode(String src, long fileId, org.apache.hadoop.hdfs.protocol.ExtendedBlock blk, org.apache.hadoop.hdfs.protocol.DatanodeInfo[] existings, String[] existingStorageIDs, org.apache.hadoop.hdfs.protocol.DatanodeInfo[] excludes, int numAdditionalNodes, String clientName) throws IOException
      Excluded nodes are not verified and will be ignored by the placement policy if they are not in the same nameservice as the file.
      Specified by:
      getAdditionalDatanode in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • abandonBlock

      public void abandonBlock(org.apache.hadoop.hdfs.protocol.ExtendedBlock b, long fileId, String src, String holder) throws IOException
      Specified by:
      abandonBlock in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • complete

      public boolean complete(String src, String clientName, org.apache.hadoop.hdfs.protocol.ExtendedBlock last, long fileId) throws IOException
      Specified by:
      complete in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • updateBlockForPipeline

      public org.apache.hadoop.hdfs.protocol.LocatedBlock updateBlockForPipeline(org.apache.hadoop.hdfs.protocol.ExtendedBlock block, String clientName) throws IOException
      Specified by:
      updateBlockForPipeline in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • updatePipeline

      public void updatePipeline(String clientName, org.apache.hadoop.hdfs.protocol.ExtendedBlock oldBlock, org.apache.hadoop.hdfs.protocol.ExtendedBlock newBlock, org.apache.hadoop.hdfs.protocol.DatanodeID[] newNodes, String[] newStorageIDs) throws IOException
      Datanodes are not verified to be in the same nameservice as the old block. TODO: this may require validation.
      Specified by:
      updatePipeline in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getPreferredBlockSize

      public long getPreferredBlockSize(String src) throws IOException
      Specified by:
      getPreferredBlockSize in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • rename

      @Deprecated public boolean rename(String src, String dst) throws IOException
      Deprecated.
      Specified by:
      rename in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • rename2

      public void rename2(String src, String dst, org.apache.hadoop.fs.Options.Rename... options) throws IOException
      Specified by:
      rename2 in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • concat

      public void concat(String trg, String[] src) throws IOException
      Specified by:
      concat in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • truncate

      public boolean truncate(String src, long newLength, String clientName) throws IOException
      Specified by:
      truncate in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • delete

      public boolean delete(String src, boolean recursive) throws IOException
      Specified by:
      delete in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • mkdirs

      public boolean mkdirs(String src, org.apache.hadoop.fs.permission.FsPermission masked, boolean createParent) throws IOException
      Specified by:
      mkdirs in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • renewLease

      public void renewLease(String clientName, List<String> namespaces) throws IOException
      Specified by:
      renewLease in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getComparator

      public static RouterClientProtocol.GetListingComparator getComparator()
    • getListing

      public org.apache.hadoop.hdfs.protocol.DirectoryListing getListing(String src, byte[] startAfter, boolean needLocation) throws IOException
      Specified by:
      getListing in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getBatchedListing

      public org.apache.hadoop.hdfs.protocol.BatchedDirectoryListing getBatchedListing(String[] srcs, byte[] startAfter, boolean needLocation) throws IOException
      Specified by:
      getBatchedListing in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getFileInfo

      public org.apache.hadoop.hdfs.protocol.HdfsFileStatus getFileInfo(String src) throws IOException
      Specified by:
      getFileInfo in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getFileRemoteLocation

      public RemoteLocation getFileRemoteLocation(String path) throws IOException
      Throws:
      IOException
    • isFileClosed

      public boolean isFileClosed(String src) throws IOException
      Specified by:
      isFileClosed in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getFileLinkInfo

      public org.apache.hadoop.hdfs.protocol.HdfsFileStatus getFileLinkInfo(String src) throws IOException
      Specified by:
      getFileLinkInfo in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getLocatedFileInfo

      public org.apache.hadoop.hdfs.protocol.HdfsLocatedFileStatus getLocatedFileInfo(String src, boolean needBlockToken) throws IOException
      Specified by:
      getLocatedFileInfo in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getStats

      public long[] getStats() throws IOException
      Specified by:
      getStats in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getDatanodeReport

      public org.apache.hadoop.hdfs.protocol.DatanodeInfo[] getDatanodeReport(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type) throws IOException
      Specified by:
      getDatanodeReport in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getDatanodeStorageReport

      public org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport[] getDatanodeStorageReport(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type) throws IOException
      Specified by:
      getDatanodeStorageReport in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getDatanodeStorageReport

      public org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport[] getDatanodeStorageReport(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type, boolean requireResponse, long timeOutMs) throws IOException
      Throws:
      IOException
    • mergeDtanodeStorageReport

      protected org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport[] mergeDtanodeStorageReport(Map<String,org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport[]> dnSubcluster)
    • setSafeMode

      public boolean setSafeMode(org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction action, boolean isChecked) throws IOException
      Specified by:
      setSafeMode in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • restoreFailedStorage

      public boolean restoreFailedStorage(String arg) throws IOException
      Specified by:
      restoreFailedStorage in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • saveNamespace

      public boolean saveNamespace(long timeWindow, long txGap) throws IOException
      Specified by:
      saveNamespace in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • rollEdits

      public long rollEdits() throws IOException
      Specified by:
      rollEdits in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • refreshNodes

      public void refreshNodes() throws IOException
      Specified by:
      refreshNodes in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • finalizeUpgrade

      public void finalizeUpgrade() throws IOException
      Specified by:
      finalizeUpgrade in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • upgradeStatus

      public boolean upgradeStatus() throws IOException
      Specified by:
      upgradeStatus in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • rollingUpgrade

      public org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo rollingUpgrade(org.apache.hadoop.hdfs.protocol.HdfsConstants.RollingUpgradeAction action) throws IOException
      Specified by:
      rollingUpgrade in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • metaSave

      public void metaSave(String filename) throws IOException
      Specified by:
      metaSave in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • listCorruptFileBlocks

      public org.apache.hadoop.hdfs.protocol.CorruptFileBlocks listCorruptFileBlocks(String path, String cookie) throws IOException
      Specified by:
      listCorruptFileBlocks in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • setBalancerBandwidth

      public void setBalancerBandwidth(long bandwidth) throws IOException
      Specified by:
      setBalancerBandwidth in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getLocationsForContentSummary

      @VisibleForTesting protected List<RemoteLocation> getLocationsForContentSummary(String path) throws IOException
      Get all the locations of the path for getContentSummary(String). For example, consider the following mount points:

      /a - [ns0 - /a]
      /a/b - [ns0 - /a/b]
      /a/b/c - [ns1 - /a/b/c]

      When the path is '/a', the resulting locations should be [RemoteLocation('/a', ns0, '/a'), RemoteLocation('/a/b/c', ns1, '/a/b/c')]. When the path is '/b', a NoLocationException is thrown; see the snippet below.
      Parameters:
      path - The path to get the content summary for.
      Returns:
      A list containing all the remote locations.
      Throws:
      IOException - If an I/O error occurs.
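      A short illustration of the example above, with the hypothetical three-entry mount table restated in the comments.

        // Mount table: /a -> [ns0, /a], /a/b -> [ns0, /a/b],
        //              /a/b/c -> [ns1, /a/b/c]
        List<RemoteLocation> locs = getLocationsForContentSummary("/a");
        // Expected: one location in ns0 covering /a (and therefore /a/b),
        // plus one location in ns1 for /a/b/c.
        getLocationsForContentSummary("/b"); // throws NoLocationException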
    • getContentSummary

      public org.apache.hadoop.fs.ContentSummary getContentSummary(String path) throws IOException
      Specified by:
      getContentSummary in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • fsync

      public void fsync(String src, long fileId, String clientName, long lastBlockLength) throws IOException
      Specified by:
      fsync in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • setTimes

      public void setTimes(String src, long mtime, long atime) throws IOException
      Specified by:
      setTimes in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • createSymlink

      public void createSymlink(String target, String link, org.apache.hadoop.fs.permission.FsPermission dirPerms, boolean createParent) throws IOException
      Specified by:
      createSymlink in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getLinkTarget

      public String getLinkTarget(String path) throws IOException
      Specified by:
      getLinkTarget in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • allowSnapshot

      public void allowSnapshot(String snapshotRoot) throws IOException
      Specified by:
      allowSnapshot in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • disallowSnapshot

      public void disallowSnapshot(String snapshot) throws IOException
      Specified by:
      disallowSnapshot in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • renameSnapshot

      public void renameSnapshot(String snapshotRoot, String snapshotOldName, String snapshotNewName) throws IOException
      Specified by:
      renameSnapshot in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getSnapshottableDirListing

      public org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus[] getSnapshottableDirListing() throws IOException
      Specified by:
      getSnapshottableDirListing in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getSnapshotListing

      public org.apache.hadoop.hdfs.protocol.SnapshotStatus[] getSnapshotListing(String snapshotRoot) throws IOException
      Specified by:
      getSnapshotListing in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getSnapshotDiffReport

      public org.apache.hadoop.hdfs.protocol.SnapshotDiffReport getSnapshotDiffReport(String snapshotRoot, String earlierSnapshotName, String laterSnapshotName) throws IOException
      Specified by:
      getSnapshotDiffReport in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getSnapshotDiffReportListing

      public org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing getSnapshotDiffReportListing(String snapshotRoot, String earlierSnapshotName, String laterSnapshotName, byte[] startPath, int index) throws IOException
      Specified by:
      getSnapshotDiffReportListing in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • addCacheDirective

      public long addCacheDirective(org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo path, EnumSet<org.apache.hadoop.fs.CacheFlag> flags) throws IOException
      Specified by:
      addCacheDirective in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • modifyCacheDirective

      public void modifyCacheDirective(org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo directive, EnumSet<org.apache.hadoop.fs.CacheFlag> flags) throws IOException
      Specified by:
      modifyCacheDirective in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • removeCacheDirective

      public void removeCacheDirective(long id) throws IOException
      Specified by:
      removeCacheDirective in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • listCacheDirectives

      public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.CacheDirectiveEntry> listCacheDirectives(long prevId, org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo filter) throws IOException
      Specified by:
      listCacheDirectives in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • addCachePool

      public void addCachePool(org.apache.hadoop.hdfs.protocol.CachePoolInfo info) throws IOException
      Specified by:
      addCachePool in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • modifyCachePool

      public void modifyCachePool(org.apache.hadoop.hdfs.protocol.CachePoolInfo info) throws IOException
      Specified by:
      modifyCachePool in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • removeCachePool

      public void removeCachePool(String cachePoolName) throws IOException
      Specified by:
      removeCachePool in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • listCachePools

      public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.CachePoolEntry> listCachePools(String prevKey) throws IOException
      Specified by:
      listCachePools in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • modifyAclEntries

      public void modifyAclEntries(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException
      Specified by:
      modifyAclEntries in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • removeAclEntries

      public void removeAclEntries(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException
      Specified by:
      removeAclEntries in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • removeDefaultAcl

      public void removeDefaultAcl(String src) throws IOException
      Specified by:
      removeDefaultAcl in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • removeAcl

      public void removeAcl(String src) throws IOException
      Specified by:
      removeAcl in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • setAcl

      public void setAcl(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException
      Specified by:
      setAcl in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getAclStatus

      public org.apache.hadoop.fs.permission.AclStatus getAclStatus(String src) throws IOException
      Specified by:
      getAclStatus in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • createEncryptionZone

      public void createEncryptionZone(String src, String keyName) throws IOException
      Specified by:
      createEncryptionZone in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getEZForPath

      public org.apache.hadoop.hdfs.protocol.EncryptionZone getEZForPath(String src) throws IOException
      Specified by:
      getEZForPath in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • listEncryptionZones

      public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.EncryptionZone> listEncryptionZones(long prevId) throws IOException
      Specified by:
      listEncryptionZones in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • reencryptEncryptionZone

      public void reencryptEncryptionZone(String zone, org.apache.hadoop.hdfs.protocol.HdfsConstants.ReencryptAction action) throws IOException
      Specified by:
      reencryptEncryptionZone in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • listReencryptionStatus

      public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.ZoneReencryptionStatus> listReencryptionStatus(long prevId) throws IOException
      Specified by:
      listReencryptionStatus in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • setXAttr

      public void setXAttr(String src, org.apache.hadoop.fs.XAttr xAttr, EnumSet<org.apache.hadoop.fs.XAttrSetFlag> flag) throws IOException
      Specified by:
      setXAttr in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getXAttrs

      public List<org.apache.hadoop.fs.XAttr> getXAttrs(String src, List<org.apache.hadoop.fs.XAttr> xAttrs) throws IOException
      Specified by:
      getXAttrs in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • listXAttrs

      public List<org.apache.hadoop.fs.XAttr> listXAttrs(String src) throws IOException
      Specified by:
      listXAttrs in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • removeXAttr

      public void removeXAttr(String src, org.apache.hadoop.fs.XAttr xAttr) throws IOException
      Specified by:
      removeXAttr in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • checkAccess

      public void checkAccess(String path, org.apache.hadoop.fs.permission.FsAction mode) throws IOException
      Specified by:
      checkAccess in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getCurrentEditLogTxid

      public long getCurrentEditLogTxid() throws IOException
      Specified by:
      getCurrentEditLogTxid in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getEditsFromTxid

      public org.apache.hadoop.hdfs.inotify.EventBatchList getEditsFromTxid(long txid) throws IOException
      Specified by:
      getEditsFromTxid in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getDataEncryptionKey

      public org.apache.hadoop.hdfs.security.token.block.DataEncryptionKey getDataEncryptionKey() throws IOException
      Specified by:
      getDataEncryptionKey in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • createSnapshot

      public String createSnapshot(String snapshotRoot, String snapshotName) throws IOException
      Specified by:
      createSnapshot in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • deleteSnapshot

      public void deleteSnapshot(String snapshotRoot, String snapshotName) throws IOException
      Specified by:
      deleteSnapshot in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • setQuota

      public void setQuota(String path, long namespaceQuota, long storagespaceQuota, org.apache.hadoop.fs.StorageType type) throws IOException
      Specified by:
      setQuota in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getQuotaUsage

      public org.apache.hadoop.fs.QuotaUsage getQuotaUsage(String path) throws IOException
      Specified by:
      getQuotaUsage in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • reportBadBlocks

      public void reportBadBlocks(org.apache.hadoop.hdfs.protocol.LocatedBlock[] blocks) throws IOException
      Specified by:
      reportBadBlocks in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • unsetStoragePolicy

      public void unsetStoragePolicy(String src) throws IOException
      Specified by:
      unsetStoragePolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getStoragePolicy

      public org.apache.hadoop.hdfs.protocol.BlockStoragePolicy getStoragePolicy(String path) throws IOException
      Specified by:
      getStoragePolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getErasureCodingPolicies

      public org.apache.hadoop.hdfs.protocol.ErasureCodingPolicyInfo[] getErasureCodingPolicies() throws IOException
      Specified by:
      getErasureCodingPolicies in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getErasureCodingCodecs

      public Map<String,String> getErasureCodingCodecs() throws IOException
      Specified by:
      getErasureCodingCodecs in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • addErasureCodingPolicies

      public org.apache.hadoop.hdfs.protocol.AddErasureCodingPolicyResponse[] addErasureCodingPolicies(org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy[] policies) throws IOException
      Specified by:
      addErasureCodingPolicies in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • removeErasureCodingPolicy

      public void removeErasureCodingPolicy(String ecPolicyName) throws IOException
      Specified by:
      removeErasureCodingPolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • disableErasureCodingPolicy

      public void disableErasureCodingPolicy(String ecPolicyName) throws IOException
      Specified by:
      disableErasureCodingPolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • enableErasureCodingPolicy

      public void enableErasureCodingPolicy(String ecPolicyName) throws IOException
      Specified by:
      enableErasureCodingPolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getErasureCodingPolicy

      public org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy getErasureCodingPolicy(String src) throws IOException
      Specified by:
      getErasureCodingPolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • setErasureCodingPolicy

      public void setErasureCodingPolicy(String src, String ecPolicyName) throws IOException
      Specified by:
      setErasureCodingPolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • unsetErasureCodingPolicy

      public void unsetErasureCodingPolicy(String src) throws IOException
      Specified by:
      unsetErasureCodingPolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getECTopologyResultForPolicies

      public org.apache.hadoop.hdfs.protocol.ECTopologyVerifierResult getECTopologyResultForPolicies(String... policyNames) throws IOException
      Specified by:
      getECTopologyResultForPolicies in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getECBlockGroupStats

      public org.apache.hadoop.hdfs.protocol.ECBlockGroupStats getECBlockGroupStats() throws IOException
      Specified by:
      getECBlockGroupStats in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getReplicatedBlockStats

      public org.apache.hadoop.hdfs.protocol.ReplicatedBlockStats getReplicatedBlockStats() throws IOException
      Specified by:
      getReplicatedBlockStats in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • listOpenFiles

      @Deprecated public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.OpenFileEntry> listOpenFiles(long prevId) throws IOException
      Deprecated.
      Specified by:
      listOpenFiles in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • listOpenFiles

      public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.OpenFileEntry> listOpenFiles(long prevId, EnumSet<org.apache.hadoop.hdfs.protocol.OpenFilesIterator.OpenFilesType> openFilesTypes, String path) throws IOException
      Specified by:
      listOpenFiles in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • msync

      public void msync() throws IOException
      Specified by:
      msync in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • satisfyStoragePolicy

      public void satisfyStoragePolicy(String path) throws IOException
      Specified by:
      satisfyStoragePolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getSlowDatanodeReport

      public org.apache.hadoop.hdfs.protocol.DatanodeInfo[] getSlowDatanodeReport() throws IOException
      Specified by:
      getSlowDatanodeReport in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getEnclosingRoot

      public org.apache.hadoop.fs.Path getEnclosingRoot(String src) throws IOException
      Specified by:
      getEnclosingRoot in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getHAServiceState

      public org.apache.hadoop.ha.HAServiceProtocol.HAServiceState getHAServiceState()
      Specified by:
      getHAServiceState in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
    • getRenameDestinations

      protected RemoteParam getRenameDestinations(List<RemoteLocation> srcLocations, List<RemoteLocation> dstLocations) throws IOException
      Determines combinations of eligible src/dst locations for a rename. A rename cannot change the namespace; it is only allowed if there is an eligible dst location in the same namespace as the source. See the sketch below.
      Parameters:
      srcLocations - List of all potential source locations where the path may reside. On return this list is trimmed to include only the paths that have corresponding destinations in the same namespace.
      dstLocations - List of all potential destination locations for the renamed path.
      Returns:
      A map of all eligible source namespaces and their corresponding replacement value.
      Throws:
      IOException - If the dst paths could not be determined.
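      An illustrative sketch of the same-namespace constraint (an assumption-level restatement, not the actual implementation): record one destination path per namespace, then trim the source locations to namespaces that have a destination.

        Map<String, String> dstPerNs = new HashMap<>();
        for (RemoteLocation dst : dstLocations) {
          dstPerNs.putIfAbsent(dst.getNameserviceId(), dst.getDest());
        }
        // A rename cannot cross namespaces: drop sources whose namespace
        // has no eligible destination.
        srcLocations.removeIf(s -> !dstPerNs.containsKey(s.getNameserviceId()));
        // The returned RemoteParam then maps each surviving source to the
        // destination path in its own namespace.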
    • aggregateContentSummary

      protected org.apache.hadoop.fs.ContentSummary aggregateContentSummary(Collection<org.apache.hadoop.fs.ContentSummary> summaries)
      Aggregate the content summaries from each subcluster. If the mount point has multiple destinations, add the quota set value only once. See the sketch below.
      Parameters:
      summaries - Collection of individual summaries.
      Returns:
      Aggregated content summary.
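      A minimal sketch of the aggregation idea using the public ContentSummary.Builder API; the quota de-duplication for multi-destination mount points is elided here.

        import java.util.Collection;
        import org.apache.hadoop.fs.ContentSummary;

        class SummaryAggregationSketch {
          static ContentSummary sum(Collection<ContentSummary> summaries) {
            long length = 0, files = 0, dirs = 0;
            for (ContentSummary s : summaries) {
              length += s.getLength();
              files += s.getFileCount();
              dirs += s.getDirectoryCount();
            }
            return new ContentSummary.Builder()
                .length(length).fileCount(files).directoryCount(dirs).build();
          }
        }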
    • getFileInfoAll

      protected org.apache.hadoop.hdfs.protocol.HdfsFileStatus getFileInfoAll(List<RemoteLocation> locations, RemoteMethod method) throws IOException
      Get the file info from all the locations.
      Parameters:
      locations - Locations to check.
      method - The file information method to run.
      Returns:
      The first file info if it is a file; the directory info if the directory is present in every location.
      Throws:
      IOException - If all the locations throw an exception.
    • getFileInfoAll

      protected org.apache.hadoop.hdfs.protocol.HdfsFileStatus getFileInfoAll(List<RemoteLocation> locations, RemoteMethod method, long timeOutMs) throws IOException
      Get the file info from all the locations.
      Parameters:
      locations - Locations to check.
      method - The file information method to run.
      timeOutMs - Time out for the operation in milliseconds.
      Returns:
      The first file info if it is a file; the directory info if the directory is present in every location. See the sketch below.
      Throws:
      IOException - If all the locations throw an exception.
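      A sketch of the documented semantics, shared by both overloads (assumption-level pseudologic with a hypothetical invoke helper, not the real concurrent implementation):

        HdfsFileStatus dirStatus = null;
        boolean everywhere = true;
        for (RemoteLocation loc : locations) {
          HdfsFileStatus status = invoke(method, loc);  // assumed helper
          if (status == null) {
            everywhere = false;        // missing in this subcluster
          } else if (!status.isDirectory()) {
            return status;             // the first real file wins
          } else if (dirStatus == null) {
            dirStatus = status;        // remember the first directory
          }
        }
        return everywhere ? dirStatus : null;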
    • getParentPermission

      protected static org.apache.hadoop.fs.permission.FsPermission getParentPermission(org.apache.hadoop.fs.permission.FsPermission mask)
      Get the permissions for the parent of a child with the given permissions, adding the implicit u+wx permission for the parent. This is based on FSDirMkdirOp#addImplicitUwx; see the sketch below.
      Parameters:
      mask - The permission mask of the child.
      Returns:
      The permission mask of the parent.
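      A minimal sketch of the implicit u+wx rule, assuming the same bit arithmetic as FSDirMkdirOp#addImplicitUwx:

        import org.apache.hadoop.fs.permission.FsPermission;

        class ParentPermissionSketch {
          // 0300 = owner write (0200) + owner execute (0100).
          static FsPermission parentOf(FsPermission childMask) {
            return new FsPermission((short) (childMask.toShort() | 0300));
          }
        }

      For example, a child mask of 0644 yields a parent mask of 0744.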
    • getMountPointStatus

      @VisibleForTesting protected org.apache.hadoop.hdfs.protocol.HdfsFileStatus getMountPointStatus(String name, int childrenNum, long date)
      Create a new file status for a mount point.
      Parameters:
      name - Name of the mount point.
      childrenNum - Number of children.
      date - Modification date of the mount point.
      Returns:
      New HDFS file status representing a mount point.
    • getMountPointStatus

      @VisibleForTesting protected org.apache.hadoop.hdfs.protocol.HdfsFileStatus getMountPointStatus(String name, int childrenNum, long date, boolean setPath)
      Create a new file status for a mount point.
      Parameters:
      name - Name of the mount point.
      childrenNum - Number of children.
      date - Modification date of the mount point.
      setPath - If true, set the path in the HdfsFileStatus.
      Returns:
      New HDFS file status representing a mount point.
    • getMountPointDates

      protected Map<String,Long> getMountPointDates(String path)
      Get the modification dates for mount points.
      Parameters:
      path - Name of the path to start checking dates from.
      Returns:
      Map with the modification dates for all sub-entries.
    • getListingInt

      protected List<RemoteResult<RemoteLocation,org.apache.hadoop.hdfs.protocol.DirectoryListing>> getListingInt(String src, byte[] startAfter, boolean needLocation) throws IOException
      Get a partial listing of the indicated directory.
      Parameters:
      src - The directory name.
      startAfter - The name to start listing after.
      needLocation - If block locations need to be returned.
      Returns:
      A partial listing starting after startAfter.
      Throws:
      IOException - If an I/O error occurs.
    • shouldAddMountPoint

      protected static boolean shouldAddMountPoint(byte[] mountPoint, byte[] lastEntry, byte[] startAfter, int remainingEntries)
      Check if we should add the mount point into the total listing. This should be done in either of two cases: 1) the current mount point is between startAfter and the cutoff lastEntry; or 2) there are no remaining entries from the subclusters and this mount point sorts after all files from the subclusters. The second case ensures that the next batch of the getListing call uses the correct startAfter, which is the lastEntry from the subcluster. See the sketch below.
      Parameters:
      mountPoint - The candidate mount point inside the router.
      lastEntry - The biggest listing entry from the subclusters.
      startAfter - The starting listing entry from the client, used to define the listing start boundary.
      remainingEntries - How many entries are left in the subclusters.
      Returns:
      True if the mount point should be added; false otherwise.
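      A hedged sketch of the two documented cases, assuming the entries compare lexicographically as byte arrays (java.util.Arrays.compare, Java 9+):

        import java.util.Arrays;

        class MountPointListingSketch {
          static boolean shouldAdd(byte[] mountPoint, byte[] lastEntry,
              byte[] startAfter, int remainingEntries) {
            if (Arrays.compare(mountPoint, startAfter) <= 0) {
              return false; // before the client's start boundary
            }
            // Case 1: the mount point falls inside the current batch.
            if (Arrays.compare(mountPoint, lastEntry) <= 0) {
              return true;
            }
            // Case 2: the subclusters are exhausted and the mount point
            // sorts after everything they returned, so it belongs here.
            return remainingEntries == 0;
          }
        }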
    • isMultiDestDirectory

      @VisibleForTesting public boolean isMultiDestDirectory(String src) throws IOException
      Checks if the path is a directory and is supposed to be present in all subclusters.
      Parameters:
      src - the source path
      Returns:
      True if the path is a directory and is supposed to be present in all subclusters; false otherwise.
      Throws:
      IOException - if unable to get the file status.
    • getRouterFederationRenameCount

      public int getRouterFederationRenameCount()
    • getRpcServer

      public RouterRpcServer getRpcServer()
    • getRpcClient

      public RouterRpcClient getRpcClient()
    • getSubclusterResolver

      public FileSubclusterResolver getSubclusterResolver()
    • getNamenodeResolver

      public ActiveNamenodeResolver getNamenodeResolver()
    • getServerDefaultsLastUpdate

      public long getServerDefaultsLastUpdate()
    • getServerDefaultsValidityPeriod

      public long getServerDefaultsValidityPeriod()
    • isAllowPartialList

      public boolean isAllowPartialList()
    • getMountStatusTimeOut

      public long getMountStatusTimeOut()
    • getSuperUser

      public String getSuperUser()
    • getSuperGroup

      public String getSuperGroup()
    • getStoragePolicy

      public RouterStoragePolicy getStoragePolicy()
    • setServerDefaultsLastUpdate

      public void setServerDefaultsLastUpdate(long serverDefaultsLastUpdate)
    • getRbfRename

      public RouterFederationRename getRbfRename()
    • getSecurityManager

      public RouterSecurityManager getSecurityManager()