Class NameNodeRpcServer

java.lang.Object
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer
All Implemented Interfaces:
org.apache.hadoop.ha.HAServiceProtocol, org.apache.hadoop.hdfs.protocol.ClientProtocol, org.apache.hadoop.hdfs.protocol.ReconfigurationProtocol, DatanodeLifelineProtocol, DatanodeProtocol, NamenodeProtocol, NamenodeProtocols, org.apache.hadoop.ipc.GenericRefreshProtocol, org.apache.hadoop.ipc.RefreshCallQueueProtocol, org.apache.hadoop.security.authorize.RefreshAuthorizationPolicyProtocol, org.apache.hadoop.security.RefreshUserMappingsProtocol, org.apache.hadoop.tools.GetUserMappingsProtocol

@Private @VisibleForTesting public class NameNodeRpcServer extends Object implements NamenodeProtocols
This class is responsible for handling all RPC calls to the NameNode. It is created, started, and stopped by the NameNode.
  • Field Details

    • namesystem

      protected final FSNamesystem namesystem
    • nn

      protected final NameNode nn
    • clientRpcServer

      protected final org.apache.hadoop.ipc.RPC.Server clientRpcServer
The RPC server that listens for requests from clients.
    • clientRpcAddress

      protected final InetSocketAddress clientRpcAddress
  • Constructor Details

  • Method Details

    • getClientRpcServer

      @VisibleForTesting public org.apache.hadoop.ipc.RPC.Server getClientRpcServer()
      Allow access to the client RPC server for testing
    • getRpcAddress

      @VisibleForTesting public InetSocketAddress getRpcAddress()
    • getAuxiliaryRpcAddresses

      @VisibleForTesting public Set<InetSocketAddress> getAuxiliaryRpcAddresses()
    • getBlocks

      public BlocksWithLocations getBlocks(org.apache.hadoop.hdfs.protocol.DatanodeInfo datanode, long size, long minBlockSize, long timeInterval, org.apache.hadoop.fs.StorageType storageType) throws IOException
      Description copied from interface: NamenodeProtocol
Get a list of blocks belonging to the given datanode whose total size equals size.
      Specified by:
      getBlocks in interface NamenodeProtocol
      Parameters:
      datanode - a data node
      size - requested size
      minBlockSize - each block should be of this minimum Block Size
timeInterval - prefer blocks that belong to cold files last accessed before the given time interval
      storageType - the given storage type StorageType
      Returns:
      BlocksWithLocations a list of blocks & their locations
      Throws:
      IOException - if size is less than or equal to 0 or datanode does not exist
      See Also:
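The selection rule described above (accumulate blocks from the given datanode until their sizes reach the requested total, skipping blocks below minBlockSize, and reject a non-positive size) can be illustrated with a self-contained sketch. The class and method below are simplified stand-ins for illustration, not Hadoop's actual API:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for the selection rule described by getBlocks().
public class BlockPicker {
    // Picks block sizes in order until their total reaches 'size',
    // skipping any block smaller than 'minBlockSize'.
    public static List<Long> pick(long[] blockSizes, long size, long minBlockSize) {
        if (size <= 0) {
            throw new IllegalArgumentException("size must be greater than 0");
        }
        List<Long> picked = new ArrayList<>();
        long total = 0;
        for (long b : blockSizes) {
            if (b < minBlockSize) {
                continue; // below the minimum block size, skip
            }
            picked.add(b);
            total += b;
            if (total >= size) {
                break; // requested total reached
            }
        }
        return picked;
    }
}
```

The real method also returns block locations and honors the cold-file time interval and storage type; this sketch shows only the size-accumulation rule.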
    • getBlockKeys

      public ExportedBlockKeys getBlockKeys() throws IOException
      Description copied from interface: NamenodeProtocol
      Get the current block keys
      Specified by:
      getBlockKeys in interface NamenodeProtocol
      Returns:
      ExportedBlockKeys containing current block keys
      Throws:
      IOException
    • errorReport

      public void errorReport(NamenodeRegistration registration, int errorCode, String msg) throws IOException
      Description copied from interface: NamenodeProtocol
Report to the active name-node that an error occurred on a subordinate node. Depending on the error code, the active node may decide to unregister the reporting node.
      Specified by:
      errorReport in interface NamenodeProtocol
      Parameters:
      registration - requesting node.
      errorCode - indicates the error
      msg - free text description of the error
      Throws:
      IOException
    • registerSubordinateNamenode

      public NamenodeRegistration registerSubordinateNamenode(NamenodeRegistration registration) throws IOException
      Description copied from interface: NamenodeProtocol
Register a subordinate name-node, such as a backup node.
      Specified by:
      registerSubordinateNamenode in interface NamenodeProtocol
      Returns:
the NamenodeRegistration of the node that this node has just registered with.
      Throws:
      IOException
    • startCheckpoint

      public NamenodeCommand startCheckpoint(NamenodeRegistration registration) throws IOException
      Description copied from interface: NamenodeProtocol
A request to the active name-node to start a checkpoint. The name-node decides whether to admit or reject the request, and also decides what should be done with the backup node image before and after the checkpoint.
      Specified by:
      startCheckpoint in interface NamenodeProtocol
      Parameters:
      registration - the requesting node
      Returns:
      CheckpointCommand if checkpoint is allowed.
      Throws:
      IOException
      See Also:
    • endCheckpoint

      public void endCheckpoint(NamenodeRegistration registration, CheckpointSignature sig) throws IOException
      Description copied from interface: NamenodeProtocol
A request to the active name-node to finalize a previously started checkpoint.
      Specified by:
      endCheckpoint in interface NamenodeProtocol
      Parameters:
      registration - the requesting node
      sig - CheckpointSignature which identifies the checkpoint.
      Throws:
      IOException
    • getDelegationToken

      public org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier> getDelegationToken(org.apache.hadoop.io.Text renewer) throws IOException
      Specified by:
      getDelegationToken in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • renewDelegationToken

      public long renewDelegationToken(org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier> token) throws org.apache.hadoop.security.token.SecretManager.InvalidToken, IOException
      Specified by:
      renewDelegationToken in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      org.apache.hadoop.security.token.SecretManager.InvalidToken
      IOException
    • cancelDelegationToken

      public void cancelDelegationToken(org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier> token) throws IOException
      Specified by:
      cancelDelegationToken in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getBlockLocations

      public org.apache.hadoop.hdfs.protocol.LocatedBlocks getBlockLocations(String src, long offset, long length) throws IOException
      Specified by:
      getBlockLocations in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getServerDefaults

      public org.apache.hadoop.fs.FsServerDefaults getServerDefaults() throws IOException
      Specified by:
      getServerDefaults in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • create

      public org.apache.hadoop.hdfs.protocol.HdfsFileStatus create(String src, org.apache.hadoop.fs.permission.FsPermission masked, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag, boolean createParent, short replication, long blockSize, org.apache.hadoop.crypto.CryptoProtocolVersion[] supportedVersions, String ecPolicyName, String storagePolicy) throws IOException
      Specified by:
      create in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • append

      public org.apache.hadoop.hdfs.protocol.LastBlockWithStatus append(String src, String clientName, org.apache.hadoop.io.EnumSetWritable<org.apache.hadoop.fs.CreateFlag> flag) throws IOException
      Specified by:
      append in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • recoverLease

      public boolean recoverLease(String src, String clientName) throws IOException
      Specified by:
      recoverLease in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • setReplication

      public boolean setReplication(String src, short replication) throws IOException
      Specified by:
      setReplication in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • unsetStoragePolicy

      public void unsetStoragePolicy(String src) throws IOException
      Specified by:
      unsetStoragePolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • setStoragePolicy

      public void setStoragePolicy(String src, String policyName) throws IOException
      Specified by:
      setStoragePolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getStoragePolicy

      public org.apache.hadoop.hdfs.protocol.BlockStoragePolicy getStoragePolicy(String path) throws IOException
      Specified by:
      getStoragePolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getStoragePolicies

      public org.apache.hadoop.hdfs.protocol.BlockStoragePolicy[] getStoragePolicies() throws IOException
      Specified by:
      getStoragePolicies in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • setPermission

      public void setPermission(String src, org.apache.hadoop.fs.permission.FsPermission permissions) throws IOException
      Specified by:
      setPermission in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • setOwner

      public void setOwner(String src, String username, String groupname) throws IOException
      Specified by:
      setOwner in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • addBlock

      public org.apache.hadoop.hdfs.protocol.LocatedBlock addBlock(String src, String clientName, org.apache.hadoop.hdfs.protocol.ExtendedBlock previous, org.apache.hadoop.hdfs.protocol.DatanodeInfo[] excludedNodes, long fileId, String[] favoredNodes, EnumSet<org.apache.hadoop.hdfs.AddBlockFlag> addBlockFlags) throws IOException
      Specified by:
      addBlock in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getAdditionalDatanode

      public org.apache.hadoop.hdfs.protocol.LocatedBlock getAdditionalDatanode(String src, long fileId, org.apache.hadoop.hdfs.protocol.ExtendedBlock blk, org.apache.hadoop.hdfs.protocol.DatanodeInfo[] existings, String[] existingStorageIDs, org.apache.hadoop.hdfs.protocol.DatanodeInfo[] excludes, int numAdditionalNodes, String clientName) throws IOException
      Specified by:
      getAdditionalDatanode in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • abandonBlock

      public void abandonBlock(org.apache.hadoop.hdfs.protocol.ExtendedBlock b, long fileId, String src, String holder) throws IOException
      The client needs to give up on the block.
      Specified by:
      abandonBlock in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • complete

      public boolean complete(String src, String clientName, org.apache.hadoop.hdfs.protocol.ExtendedBlock last, long fileId) throws IOException
      Specified by:
      complete in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • reportBadBlocks

      public void reportBadBlocks(org.apache.hadoop.hdfs.protocol.LocatedBlock[] blocks) throws IOException
      The client has detected an error on the specified located blocks and is reporting them to the server. For now, the namenode will mark the block as corrupt. In the future we might check the blocks are actually corrupt.
      Specified by:
      reportBadBlocks in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Specified by:
      reportBadBlocks in interface DatanodeProtocol
      Throws:
      IOException
    • updateBlockForPipeline

      public org.apache.hadoop.hdfs.protocol.LocatedBlock updateBlockForPipeline(org.apache.hadoop.hdfs.protocol.ExtendedBlock block, String clientName) throws IOException
      Specified by:
      updateBlockForPipeline in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • updatePipeline

      public void updatePipeline(String clientName, org.apache.hadoop.hdfs.protocol.ExtendedBlock oldBlock, org.apache.hadoop.hdfs.protocol.ExtendedBlock newBlock, org.apache.hadoop.hdfs.protocol.DatanodeID[] newNodes, String[] newStorageIDs) throws IOException
      Specified by:
      updatePipeline in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • commitBlockSynchronization

      public void commitBlockSynchronization(org.apache.hadoop.hdfs.protocol.ExtendedBlock block, long newgenerationstamp, long newlength, boolean closeFile, boolean deleteblock, org.apache.hadoop.hdfs.protocol.DatanodeID[] newtargets, String[] newtargetstorages) throws IOException
      Description copied from interface: DatanodeProtocol
      Commit block synchronization in lease recovery
      Specified by:
      commitBlockSynchronization in interface DatanodeProtocol
      Throws:
      IOException
    • getPreferredBlockSize

      public long getPreferredBlockSize(String filename) throws IOException
      Specified by:
      getPreferredBlockSize in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • rename

      @Deprecated public boolean rename(String src, String dst) throws IOException
      Deprecated.
      Specified by:
      rename in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • concat

      public void concat(String trg, String[] src) throws IOException
      Specified by:
      concat in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • rename2

      public void rename2(String src, String dst, org.apache.hadoop.fs.Options.Rename... options) throws IOException
      Specified by:
      rename2 in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • truncate

      public boolean truncate(String src, long newLength, String clientName) throws IOException
      Specified by:
      truncate in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • delete

      public boolean delete(String src, boolean recursive) throws IOException
      Specified by:
      delete in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • mkdirs

      public boolean mkdirs(String src, org.apache.hadoop.fs.permission.FsPermission masked, boolean createParent) throws IOException
      Specified by:
      mkdirs in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • renewLease

      public void renewLease(String clientName, List<String> namespaces) throws IOException
      Specified by:
      renewLease in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getListing

      public org.apache.hadoop.hdfs.protocol.DirectoryListing getListing(String src, byte[] startAfter, boolean needLocation) throws IOException
      Specified by:
      getListing in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getBatchedListing

      public org.apache.hadoop.hdfs.protocol.BatchedDirectoryListing getBatchedListing(String[] srcs, byte[] startAfter, boolean needLocation) throws IOException
      Specified by:
      getBatchedListing in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getFileInfo

      public org.apache.hadoop.hdfs.protocol.HdfsFileStatus getFileInfo(String src) throws IOException
      Specified by:
      getFileInfo in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getLocatedFileInfo

      public org.apache.hadoop.hdfs.protocol.HdfsLocatedFileStatus getLocatedFileInfo(String src, boolean needBlockToken) throws IOException
      Specified by:
      getLocatedFileInfo in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • isFileClosed

      public boolean isFileClosed(String src) throws IOException
      Specified by:
      isFileClosed in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getFileLinkInfo

      public org.apache.hadoop.hdfs.protocol.HdfsFileStatus getFileLinkInfo(String src) throws IOException
      Specified by:
      getFileLinkInfo in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getStats

      public long[] getStats() throws IOException
      Specified by:
      getStats in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getReplicatedBlockStats

      public org.apache.hadoop.hdfs.protocol.ReplicatedBlockStats getReplicatedBlockStats() throws IOException
      Specified by:
      getReplicatedBlockStats in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getECBlockGroupStats

      public org.apache.hadoop.hdfs.protocol.ECBlockGroupStats getECBlockGroupStats() throws IOException
      Specified by:
      getECBlockGroupStats in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getDatanodeReport

      public org.apache.hadoop.hdfs.protocol.DatanodeInfo[] getDatanodeReport(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type) throws IOException
      Specified by:
      getDatanodeReport in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getDatanodeStorageReport

      public org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport[] getDatanodeStorageReport(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type) throws IOException
      Specified by:
      getDatanodeStorageReport in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • setSafeMode

      public boolean setSafeMode(org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction action, boolean isChecked) throws IOException
      Specified by:
      setSafeMode in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • restoreFailedStorage

      public boolean restoreFailedStorage(String arg) throws IOException
      Specified by:
      restoreFailedStorage in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • saveNamespace

      public boolean saveNamespace(long timeWindow, long txGap) throws IOException
      Specified by:
      saveNamespace in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • rollEdits

      public long rollEdits() throws org.apache.hadoop.security.AccessControlException, IOException
      Specified by:
      rollEdits in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      org.apache.hadoop.security.AccessControlException
      IOException
    • refreshNodes

      public void refreshNodes() throws IOException
      Specified by:
      refreshNodes in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getTransactionID

      public long getTransactionID() throws IOException
      Specified by:
      getTransactionID in interface NamenodeProtocol
      Returns:
      The most recent transaction ID that has been synced to persistent storage, or applied from persistent storage in the case of a non-active node.
      Throws:
      IOException
    • getMostRecentCheckpointTxId

      public long getMostRecentCheckpointTxId() throws IOException
      Description copied from interface: NamenodeProtocol
      Get the transaction ID of the most recent checkpoint.
      Specified by:
      getMostRecentCheckpointTxId in interface NamenodeProtocol
      Throws:
      IOException
    • getMostRecentNameNodeFileTxId

      public long getMostRecentNameNodeFileTxId(NNStorage.NameNodeFile nnf) throws IOException
      Description copied from interface: NamenodeProtocol
      Get the transaction ID of the most recent checkpoint for the given NameNodeFile.
      Specified by:
      getMostRecentNameNodeFileTxId in interface NamenodeProtocol
      Throws:
      IOException
    • rollEditLog

      public CheckpointSignature rollEditLog() throws IOException
      Description copied from interface: NamenodeProtocol
      Closes the current edit log and opens a new one. The call fails if the file system is in SafeMode.
      Specified by:
      rollEditLog in interface NamenodeProtocol
      Returns:
      a unique token to identify this transaction.
      Throws:
      IOException
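The behaviour described above (refuse to roll while the file system is in safe mode, otherwise close the current segment and open a new one identified by a token) can be sketched in a minimal, self-contained form. This class and its fields are illustrative assumptions, not Hadoop's FSEditLog:

```java
// Minimal sketch of the roll-edit-log behaviour: fail in safe mode,
// otherwise close the current segment and open a new one starting at the
// next transaction id. Illustrative only; not Hadoop's FSEditLog.
public class EditLogRollSketch {
    private long currentSegmentStartTxId = 1;
    private long nextTxId = 1;
    private final boolean safeMode;

    public EditLogRollSketch(boolean safeMode) { this.safeMode = safeMode; }

    // Record one edit (consumes one transaction id).
    public void logEdit() { nextTxId++; }

    // Returns the start txid of the newly opened segment, standing in for
    // the unique token the description above mentions.
    public long roll() {
        if (safeMode) {
            throw new IllegalStateException("Cannot roll edit log in safe mode");
        }
        currentSegmentStartTxId = nextTxId; // new segment begins here
        return currentSegmentStartTxId;
    }

    public long getCurrentSegmentStartTxId() { return currentSegmentStartTxId; }
}
```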
    • getEditLogManifest

      public RemoteEditLogManifest getEditLogManifest(long sinceTxId) throws IOException
      Description copied from interface: NamenodeProtocol
      Return a structure containing details about all edit logs available to be fetched from the NameNode.
      Specified by:
      getEditLogManifest in interface NamenodeProtocol
      Parameters:
      sinceTxId - return only logs that contain transactions >= sinceTxId
      Throws:
      IOException
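The sinceTxId filtering described above can be illustrated with a small sketch: given edit-log segments modeled as [startTxId, endTxId] pairs, keep every segment containing at least one transaction at or after sinceTxId. The representation here is a stand-in for illustration, not Hadoop's RemoteEditLogManifest:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of edit-log manifest filtering. Segments are modeled
// as [startTxId, endTxId] pairs; a segment qualifies if its last txid is
// at or after sinceTxId. Not Hadoop's actual types.
public class ManifestSketch {
    public static List<long[]> logsSince(List<long[]> segments, long sinceTxId) {
        List<long[]> out = new ArrayList<>();
        for (long[] seg : segments) {
            if (seg[1] >= sinceTxId) { // segment's last txid reaches sinceTxId
                out.add(seg);
            }
        }
        return out;
    }
}
```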
    • isUpgradeFinalized

      public boolean isUpgradeFinalized() throws IOException
      Specified by:
      isUpgradeFinalized in interface NamenodeProtocol
      Returns:
true if the upgrade has been finalized (the NameNode is not in an upgrade state); false otherwise
      Throws:
      IOException
    • isRollingUpgrade

      public boolean isRollingUpgrade() throws IOException
      Description copied from interface: NamenodeProtocol
Return whether a rolling upgrade is in progress on the Namenode (true) or not (false).
      Specified by:
      isRollingUpgrade in interface NamenodeProtocol
      Returns:
      Throws:
      IOException
    • finalizeUpgrade

      public void finalizeUpgrade() throws IOException
      Specified by:
      finalizeUpgrade in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • upgradeStatus

      public boolean upgradeStatus() throws IOException
      Specified by:
      upgradeStatus in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • rollingUpgrade

      public org.apache.hadoop.hdfs.protocol.RollingUpgradeInfo rollingUpgrade(org.apache.hadoop.hdfs.protocol.HdfsConstants.RollingUpgradeAction action) throws IOException
      Specified by:
      rollingUpgrade in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • metaSave

      public void metaSave(String filename) throws IOException
      Specified by:
      metaSave in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • listOpenFiles

      @Deprecated public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.OpenFileEntry> listOpenFiles(long prevId) throws IOException
      Deprecated.
      Specified by:
      listOpenFiles in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • listOpenFiles

      public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.OpenFileEntry> listOpenFiles(long prevId, EnumSet<org.apache.hadoop.hdfs.protocol.OpenFilesIterator.OpenFilesType> openFilesTypes, String path) throws IOException
      Specified by:
      listOpenFiles in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • msync

      public void msync() throws IOException
      Specified by:
      msync in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getHAServiceState

      public org.apache.hadoop.ha.HAServiceProtocol.HAServiceState getHAServiceState() throws IOException
      Specified by:
      getHAServiceState in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • listCorruptFileBlocks

      public org.apache.hadoop.hdfs.protocol.CorruptFileBlocks listCorruptFileBlocks(String path, String cookie) throws IOException
      Specified by:
      listCorruptFileBlocks in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • setBalancerBandwidth

      public void setBalancerBandwidth(long bandwidth) throws IOException
      Tell all datanodes to use a new, non-persistent bandwidth value for dfs.datanode.balance.bandwidthPerSec.
      Specified by:
      setBalancerBandwidth in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Parameters:
      bandwidth - Balancer bandwidth in bytes per second for all datanodes.
      Throws:
      IOException
    • getContentSummary

      public org.apache.hadoop.fs.ContentSummary getContentSummary(String path) throws IOException
      Specified by:
      getContentSummary in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getQuotaUsage

      public org.apache.hadoop.fs.QuotaUsage getQuotaUsage(String path) throws IOException
      Specified by:
      getQuotaUsage in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • satisfyStoragePolicy

      public void satisfyStoragePolicy(String src) throws IOException
      Specified by:
      satisfyStoragePolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getSlowDatanodeReport

      public org.apache.hadoop.hdfs.protocol.DatanodeInfo[] getSlowDatanodeReport() throws IOException
      Specified by:
      getSlowDatanodeReport in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • setQuota

      public void setQuota(String path, long namespaceQuota, long storagespaceQuota, org.apache.hadoop.fs.StorageType type) throws IOException
      Specified by:
      setQuota in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • fsync

      public void fsync(String src, long fileId, String clientName, long lastBlockLength) throws IOException
      Specified by:
      fsync in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • setTimes

      public void setTimes(String src, long mtime, long atime) throws IOException
      Specified by:
      setTimes in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • createSymlink

      public void createSymlink(String target, String link, org.apache.hadoop.fs.permission.FsPermission dirPerms, boolean createParent) throws IOException
      Specified by:
      createSymlink in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getLinkTarget

      public String getLinkTarget(String path) throws IOException
      Specified by:
      getLinkTarget in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • registerDatanode

      public DatanodeRegistration registerDatanode(DatanodeRegistration nodeReg) throws IOException
      Description copied from interface: DatanodeProtocol
      Register Datanode.
      Specified by:
      registerDatanode in interface DatanodeProtocol
      Parameters:
      nodeReg - datanode registration information
      Returns:
      the given DatanodeRegistration with updated registration information
      Throws:
      IOException
      See Also:
      • FSNamesystem.registerDatanode(DatanodeRegistration)
    • sendHeartbeat

      public HeartbeatResponse sendHeartbeat(DatanodeRegistration nodeReg, org.apache.hadoop.hdfs.server.protocol.StorageReport[] report, long dnCacheCapacity, long dnCacheUsed, int xmitsInProgress, int xceiverCount, int failedVolumes, VolumeFailureSummary volumeFailureSummary, boolean requestFullBlockReportLease, @Nonnull org.apache.hadoop.hdfs.server.protocol.SlowPeerReports slowPeers, @Nonnull org.apache.hadoop.hdfs.server.protocol.SlowDiskReports slowDisks) throws IOException
      Description copied from interface: DatanodeProtocol
sendHeartbeat() tells the NameNode that the DataNode is still alive and well, and includes some status information. It also gives the NameNode a chance to return an array of DatanodeCommand objects in the HeartbeatResponse. A DatanodeCommand tells the DataNode to invalidate local block(s), copy them to other DataNodes, etc.
      Specified by:
      sendHeartbeat in interface DatanodeProtocol
      Parameters:
      nodeReg - datanode registration information.
      report - utilization report per storage.
      dnCacheCapacity - the total cache capacity of the datanode (in bytes).
      dnCacheUsed - the amount of cache used by the datanode (in bytes).
      xmitsInProgress - number of transfers from this datanode to others.
      xceiverCount - number of active transceiver threads.
      failedVolumes - number of failed volumes.
      volumeFailureSummary - info about volume failures.
      requestFullBlockReportLease - whether to request a full block report lease.
      slowPeers - Details of peer DataNodes that were detected as being slow to respond to packet writes. Empty report if no slow peers were detected by the DataNode.
      slowDisks - Details of disks on DataNodes that were detected as being slow. Empty report if no slow disks were detected.
      Throws:
      IOException - on error.
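The heartbeat/command exchange described above can be sketched in a self-contained form: the NameNode side records that the reporting node is alive, then hands back any commands queued for it. All types below are illustrative stand-ins, not Hadoop's DatanodeCommand or HeartbeatResponse:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Self-contained sketch of the heartbeat/command exchange. The NameNode
// side notes the node is alive and returns pending commands for it.
public class HeartbeatSketch {
    public enum Command { INVALIDATE_BLOCKS, TRANSFER_BLOCKS }

    // Stand-in for the per-datanode command queue held by the NameNode.
    public interface CommandQueue {
        List<Command> drainFor(String datanodeId);
    }

    public static final class HeartbeatResponse {
        public final List<Command> commands;
        public HeartbeatResponse(List<Command> commands) { this.commands = commands; }
    }

    private final Map<String, Long> lastSeenMillis = new HashMap<>();

    // NameNode side: liveness bookkeeping, then hand back queued commands.
    public HeartbeatResponse handleHeartbeat(String datanodeId, long nowMillis,
                                             CommandQueue queue) {
        lastSeenMillis.put(datanodeId, nowMillis);
        return new HeartbeatResponse(queue.drainFor(datanodeId));
    }

    public Long lastSeen(String datanodeId) {
        return lastSeenMillis.get(datanodeId);
    }
}
```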
    • blockReport

      public DatanodeCommand blockReport(DatanodeRegistration nodeReg, String poolId, StorageBlockReport[] reports, BlockReportContext context) throws IOException
      Description copied from interface: DatanodeProtocol
      blockReport() tells the NameNode about all the locally-stored blocks. The NameNode returns an array of Blocks that have become obsolete and should be deleted. This function is meant to upload *all* the locally-stored blocks. It's invoked upon startup and then infrequently afterwards.
      Specified by:
      blockReport in interface DatanodeProtocol
      Parameters:
      nodeReg - datanode registration
      poolId - the block pool ID for the blocks
reports - report of blocks per storage. Each finalized block is represented as 3 longs; each under-construction replica is represented as 4 longs. This is done instead of Block[] to reduce the memory used by block reports.
      context - Context information for this block report.
      Returns:
the next command for the DataNode to process.
      Throws:
      IOException
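The "longs instead of Block[]" point in the reports parameter can be illustrated with a small sketch that packs finalized blocks into a flat long[] (three longs per block). The field order here is an assumption for illustration only, not Hadoop's actual block-report encoding:

```java
// Illustrative sketch: pack finalized blocks into a flat long[] (three longs
// per block) rather than an object array, trading object headers for a
// single primitive array. Field order is an assumption, not Hadoop's layout.
public class BlockListSketch {
    // Each input row is {blockId, lengthBytes, generationStamp}.
    public static long[] encodeFinalized(long[][] blocks) {
        long[] out = new long[blocks.length * 3];
        int i = 0;
        for (long[] b : blocks) {
            out[i++] = b[0]; // block id
            out[i++] = b[1]; // length in bytes
            out[i++] = b[2]; // generation stamp
        }
        return out;
    }

    public static int count(long[] encoded) {
        return encoded.length / 3; // three longs per finalized block
    }
}
```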
    • cacheReport

      public DatanodeCommand cacheReport(DatanodeRegistration nodeReg, String poolId, List<Long> blockIds) throws IOException
      Description copied from interface: DatanodeProtocol
Communicates the complete list of locally cached blocks to the NameNode. This method is similar to DatanodeProtocol.blockReport(DatanodeRegistration, String, StorageBlockReport[], BlockReportContext), which is used to communicate blocks stored on disk.
      Specified by:
      cacheReport in interface DatanodeProtocol
      Parameters:
      nodeReg - The datanode registration.
      poolId - The block pool ID for the blocks.
      blockIds - A list of block IDs.
      Returns:
      The DatanodeCommand.
      Throws:
      IOException
    • blockReceivedAndDeleted

      public void blockReceivedAndDeleted(DatanodeRegistration nodeReg, String poolId, StorageReceivedDeletedBlocks[] receivedAndDeletedBlocks) throws IOException
      Description copied from interface: DatanodeProtocol
blockReceivedAndDeleted() allows the DataNode to tell the NameNode about recently received and recently deleted block data. For received blocks, it also provides a hint about which replica it would prefer to have deleted if there are excess replicas. For example, whenever client code writes a new block here, or another DataNode copies a block to this DataNode, it will call blockReceived().
      Specified by:
      blockReceivedAndDeleted in interface DatanodeProtocol
      Throws:
      IOException
    • errorReport

      public void errorReport(DatanodeRegistration nodeReg, int errorCode, String msg) throws IOException
      Description copied from interface: DatanodeProtocol
      errorReport() tells the NameNode about something that has gone awry. Useful for debugging.
      Specified by:
      errorReport in interface DatanodeProtocol
      Throws:
      IOException
    • versionRequest

      public NamespaceInfo versionRequest() throws IOException
      Description copied from interface: NamenodeProtocol
      Request name-node version and storage information.
      Specified by:
      versionRequest in interface DatanodeProtocol
      Specified by:
      versionRequest in interface NamenodeProtocol
      Returns:
      NamespaceInfo identifying versions and storage information of the name-node
      Throws:
      IOException
    • sendLifeline

      public void sendLifeline(DatanodeRegistration nodeReg, org.apache.hadoop.hdfs.server.protocol.StorageReport[] report, long dnCacheCapacity, long dnCacheUsed, int xmitsInProgress, int xceiverCount, int failedVolumes, VolumeFailureSummary volumeFailureSummary) throws IOException
      Specified by:
      sendLifeline in interface DatanodeLifelineProtocol
      Throws:
      IOException
    • refreshServiceAcl

      public void refreshServiceAcl() throws IOException
      Specified by:
      refreshServiceAcl in interface org.apache.hadoop.security.authorize.RefreshAuthorizationPolicyProtocol
      Throws:
      IOException
    • refreshUserToGroupsMappings

      public void refreshUserToGroupsMappings() throws IOException
      Specified by:
      refreshUserToGroupsMappings in interface org.apache.hadoop.security.RefreshUserMappingsProtocol
      Throws:
      IOException
    • refreshSuperUserGroupsConfiguration

      public void refreshSuperUserGroupsConfiguration() throws IOException
      Specified by:
      refreshSuperUserGroupsConfiguration in interface org.apache.hadoop.security.RefreshUserMappingsProtocol
      Throws:
      IOException
    • refreshCallQueue

      public void refreshCallQueue() throws IOException
      Specified by:
      refreshCallQueue in interface org.apache.hadoop.ipc.RefreshCallQueueProtocol
      Throws:
      IOException
    • refresh

      public Collection<org.apache.hadoop.ipc.RefreshResponse> refresh(String identifier, String[] args)
      Specified by:
      refresh in interface org.apache.hadoop.ipc.GenericRefreshProtocol
    • getGroupsForUser

      public String[] getGroupsForUser(String user) throws IOException
      Specified by:
      getGroupsForUser in interface org.apache.hadoop.tools.GetUserMappingsProtocol
      Throws:
      IOException
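      The refresh and group-mapping RPCs above back several `hdfs` subcommands; this is how administrators typically reach them rather than calling the protocol directly. A sketch (the user name is a placeholder):

      ```shell
      # Reload the service-level authorization policy file (refreshServiceAcl).
      hdfs dfsadmin -refreshServiceAcl

      # Reload user-to-groups and proxy-user (superuser) mappings.
      hdfs dfsadmin -refreshUserToGroupsMappings
      hdfs dfsadmin -refreshSuperUserGroupsConfiguration

      # Swap in a new call-queue implementation without restarting the NameNode.
      hdfs dfsadmin -refreshCallQueue

      # Resolve a user's groups via GetUserMappingsProtocol.
      hdfs groups alice
      ```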
    • monitorHealth

      public void monitorHealth() throws org.apache.hadoop.ha.HealthCheckFailedException, org.apache.hadoop.security.AccessControlException, IOException
      Specified by:
      monitorHealth in interface org.apache.hadoop.ha.HAServiceProtocol
      Throws:
      org.apache.hadoop.ha.HealthCheckFailedException
      org.apache.hadoop.security.AccessControlException
      IOException
    • transitionToActive

      public void transitionToActive(org.apache.hadoop.ha.HAServiceProtocol.StateChangeRequestInfo req) throws org.apache.hadoop.ha.ServiceFailedException, org.apache.hadoop.security.AccessControlException, IOException
      Specified by:
      transitionToActive in interface org.apache.hadoop.ha.HAServiceProtocol
      Throws:
      org.apache.hadoop.ha.ServiceFailedException
      org.apache.hadoop.security.AccessControlException
      IOException
    • transitionToStandby

      public void transitionToStandby(org.apache.hadoop.ha.HAServiceProtocol.StateChangeRequestInfo req) throws org.apache.hadoop.ha.ServiceFailedException, org.apache.hadoop.security.AccessControlException, IOException
      Specified by:
      transitionToStandby in interface org.apache.hadoop.ha.HAServiceProtocol
      Throws:
      org.apache.hadoop.ha.ServiceFailedException
      org.apache.hadoop.security.AccessControlException
      IOException
    • transitionToObserver

      public void transitionToObserver(org.apache.hadoop.ha.HAServiceProtocol.StateChangeRequestInfo req) throws org.apache.hadoop.ha.ServiceFailedException, org.apache.hadoop.security.AccessControlException, IOException
      Specified by:
      transitionToObserver in interface org.apache.hadoop.ha.HAServiceProtocol
      Throws:
      org.apache.hadoop.ha.ServiceFailedException
      org.apache.hadoop.security.AccessControlException
      IOException
    • getServiceStatus

      public org.apache.hadoop.ha.HAServiceStatus getServiceStatus() throws org.apache.hadoop.security.AccessControlException, org.apache.hadoop.ha.ServiceFailedException, IOException
      Specified by:
      getServiceStatus in interface org.apache.hadoop.ha.HAServiceProtocol
      Throws:
      org.apache.hadoop.security.AccessControlException
      org.apache.hadoop.ha.ServiceFailedException
      IOException
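      The HA transition methods above are normally driven through the `hdfs haadmin` CLI rather than invoked directly. A sketch of the corresponding commands (the service IDs `nn1`/`nn2` are placeholders for IDs defined in a hypothetical hdfs-site.xml):

      ```shell
      # Query the current HA state of a NameNode (getServiceStatus).
      hdfs haadmin -getServiceState nn1

      # Manually transition NameNodes between states. When automatic failover
      # is enabled, manual transitions are refused unless forced.
      hdfs haadmin -transitionToActive nn1
      hdfs haadmin -transitionToStandby nn1
      hdfs haadmin -transitionToObserver nn2
      ```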
    • getDataEncryptionKey

      public org.apache.hadoop.hdfs.security.token.block.DataEncryptionKey getDataEncryptionKey() throws IOException
      Specified by:
      getDataEncryptionKey in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • createSnapshot

      public String createSnapshot(String snapshotRoot, String snapshotName) throws IOException
      Specified by:
      createSnapshot in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • deleteSnapshot

      public void deleteSnapshot(String snapshotRoot, String snapshotName) throws IOException
      Specified by:
      deleteSnapshot in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • allowSnapshot

      public void allowSnapshot(String snapshotRoot) throws IOException
      Specified by:
      allowSnapshot in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • disallowSnapshot

      public void disallowSnapshot(String snapshot) throws IOException
      Specified by:
      disallowSnapshot in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • renameSnapshot

      public void renameSnapshot(String snapshotRoot, String snapshotOldName, String snapshotNewName) throws IOException
      Specified by:
      renameSnapshot in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getSnapshottableDirListing

      public org.apache.hadoop.hdfs.protocol.SnapshottableDirectoryStatus[] getSnapshottableDirListing() throws IOException
      Specified by:
      getSnapshottableDirListing in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getSnapshotListing

      public org.apache.hadoop.hdfs.protocol.SnapshotStatus[] getSnapshotListing(String snapshotRoot) throws IOException
      Specified by:
      getSnapshotListing in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getSnapshotDiffReport

      public org.apache.hadoop.hdfs.protocol.SnapshotDiffReport getSnapshotDiffReport(String snapshotRoot, String earlierSnapshotName, String laterSnapshotName) throws IOException
      Specified by:
      getSnapshotDiffReport in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getSnapshotDiffReportListing

      public org.apache.hadoop.hdfs.protocol.SnapshotDiffReportListing getSnapshotDiffReportListing(String snapshotRoot, String earlierSnapshotName, String laterSnapshotName, byte[] startPath, int index) throws IOException
      Specified by:
      getSnapshotDiffReportListing in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
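      The snapshot RPCs above correspond to the snapshot CLI. A sketch (paths and snapshot names are placeholders):

      ```shell
      # Mark a directory as snapshottable (allowSnapshot), then take snapshots.
      hdfs dfsadmin -allowSnapshot /data
      hdfs dfs -createSnapshot /data s1     # appears under /data/.snapshot/s1
      hdfs dfs -renameSnapshot /data s1 before-upgrade
      hdfs dfs -deleteSnapshot /data before-upgrade

      # List snapshottable directories and diff two snapshots
      # (getSnapshottableDirListing, getSnapshotDiffReport).
      hdfs lsSnapshottableDir
      hdfs snapshotDiff /data s1 s2
      ```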
    • addCacheDirective

      public long addCacheDirective(org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo path, EnumSet<org.apache.hadoop.fs.CacheFlag> flags) throws IOException
      Specified by:
      addCacheDirective in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • modifyCacheDirective

      public void modifyCacheDirective(org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo directive, EnumSet<org.apache.hadoop.fs.CacheFlag> flags) throws IOException
      Specified by:
      modifyCacheDirective in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • removeCacheDirective

      public void removeCacheDirective(long id) throws IOException
      Specified by:
      removeCacheDirective in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • listCacheDirectives

      public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.CacheDirectiveEntry> listCacheDirectives(long prevId, org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo filter) throws IOException
      Specified by:
      listCacheDirectives in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • addCachePool

      public void addCachePool(org.apache.hadoop.hdfs.protocol.CachePoolInfo info) throws IOException
      Specified by:
      addCachePool in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • modifyCachePool

      public void modifyCachePool(org.apache.hadoop.hdfs.protocol.CachePoolInfo info) throws IOException
      Specified by:
      modifyCachePool in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • removeCachePool

      public void removeCachePool(String cachePoolName) throws IOException
      Specified by:
      removeCachePool in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • listCachePools

      public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.CachePoolEntry> listCachePools(String prevKey) throws IOException
      Specified by:
      listCachePools in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
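      The cache pool and cache directive RPCs above are exposed through `hdfs cacheadmin`. A sketch (pool name, path, and directive ID are placeholders):

      ```shell
      # Create a pool, then ask the NameNode to keep a path in DataNode memory.
      hdfs cacheadmin -addPool hotPool -mode 0755
      hdfs cacheadmin -addDirective -path /data/hot -pool hotPool -replication 2

      # Inspect cache state (listCachePools, listCacheDirectives).
      hdfs cacheadmin -listPools
      hdfs cacheadmin -listDirectives -pool hotPool

      # Remove a directive by the ID returned when it was added.
      hdfs cacheadmin -removeDirective 1
      ```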
    • modifyAclEntries

      public void modifyAclEntries(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException
      Specified by:
      modifyAclEntries in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • removeAclEntries

      public void removeAclEntries(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException
      Specified by:
      removeAclEntries in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • removeDefaultAcl

      public void removeDefaultAcl(String src) throws IOException
      Specified by:
      removeDefaultAcl in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • removeAcl

      public void removeAcl(String src) throws IOException
      Specified by:
      removeAcl in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • setAcl

      public void setAcl(String src, List<org.apache.hadoop.fs.permission.AclEntry> aclSpec) throws IOException
      Specified by:
      setAcl in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getAclStatus

      public org.apache.hadoop.fs.permission.AclStatus getAclStatus(String src) throws IOException
      Specified by:
      getAclStatus in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
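      The ACL RPCs above map onto `hdfs dfs -setfacl` / `-getfacl`. A sketch (path and user are placeholders):

      ```shell
      # Add or modify entries (modifyAclEntries).
      hdfs dfs -setfacl -m user:alice:r-x /data
      # Remove specific entries (removeAclEntries).
      hdfs dfs -setfacl -x user:alice /data
      # Drop only the default ACL (removeDefaultAcl), or all ACL entries (removeAcl).
      hdfs dfs -setfacl -k /data
      hdfs dfs -setfacl -b /data
      # Replace the ACL outright (setAcl) and read it back (getAclStatus).
      hdfs dfs -setfacl --set user::rw-,group::r--,other::---,user:alice:r-- /data
      hdfs dfs -getfacl /data
      ```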
    • createEncryptionZone

      public void createEncryptionZone(String src, String keyName) throws IOException
      Specified by:
      createEncryptionZone in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getEZForPath

      public org.apache.hadoop.hdfs.protocol.EncryptionZone getEZForPath(String src) throws IOException
      Specified by:
      getEZForPath in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • listEncryptionZones

      public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.EncryptionZone> listEncryptionZones(long prevId) throws IOException
      Specified by:
      listEncryptionZones in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • reencryptEncryptionZone

      public void reencryptEncryptionZone(String zone, org.apache.hadoop.hdfs.protocol.HdfsConstants.ReencryptAction action) throws IOException
      Specified by:
      reencryptEncryptionZone in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • listReencryptionStatus

      public org.apache.hadoop.fs.BatchedRemoteIterator.BatchedEntries<org.apache.hadoop.hdfs.protocol.ZoneReencryptionStatus> listReencryptionStatus(long prevId) throws IOException
      Specified by:
      listReencryptionStatus in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
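      The encryption zone RPCs above are reached through the `hdfs crypto` CLI, with keys managed by the Hadoop KMS. A sketch (key name and path are placeholders):

      ```shell
      # Create a key in the KMS, then make an empty directory an encryption zone.
      hadoop key create dataKey
      hdfs crypto -createZone -keyName dataKey -path /secure
      hdfs crypto -listZones

      # After rolling the key, re-encrypt the zone's EDEKs
      # (reencryptEncryptionZone, listReencryptionStatus).
      hadoop key roll dataKey
      hdfs crypto -reencryptZone -start -path /secure
      hdfs crypto -listReencryptionStatus
      ```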
    • setErasureCodingPolicy

      public void setErasureCodingPolicy(String src, String ecPolicyName) throws IOException
      Specified by:
      setErasureCodingPolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • setXAttr

      public void setXAttr(String src, org.apache.hadoop.fs.XAttr xAttr, EnumSet<org.apache.hadoop.fs.XAttrSetFlag> flag) throws IOException
      Specified by:
      setXAttr in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getXAttrs

      public List<org.apache.hadoop.fs.XAttr> getXAttrs(String src, List<org.apache.hadoop.fs.XAttr> xAttrs) throws IOException
      Specified by:
      getXAttrs in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • listXAttrs

      public List<org.apache.hadoop.fs.XAttr> listXAttrs(String src) throws IOException
      Specified by:
      listXAttrs in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • removeXAttr

      public void removeXAttr(String src, org.apache.hadoop.fs.XAttr xAttr) throws IOException
      Specified by:
      removeXAttr in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
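      The xattr RPCs above back the `-setfattr` / `-getfattr` shell commands. A sketch (attribute name, value, and path are placeholders):

      ```shell
      # Set, read, list, and remove extended attributes on a path
      # (setXAttr, getXAttrs, listXAttrs, removeXAttr).
      hdfs dfs -setfattr -n user.origin -v cluster-a /data/file
      hdfs dfs -getfattr -n user.origin /data/file
      hdfs dfs -getfattr -d /data/file     # dump all xattrs
      hdfs dfs -setfattr -x user.origin /data/file
      ```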
    • checkAccess

      public void checkAccess(String path, org.apache.hadoop.fs.permission.FsAction mode) throws IOException
      Specified by:
      checkAccess in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getCurrentEditLogTxid

      public long getCurrentEditLogTxid() throws IOException
      Specified by:
      getCurrentEditLogTxid in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getEditsFromTxid

      public org.apache.hadoop.hdfs.inotify.EventBatchList getEditsFromTxid(long txid) throws IOException
      Specified by:
      getEditsFromTxid in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getErasureCodingPolicies

      public org.apache.hadoop.hdfs.protocol.ErasureCodingPolicyInfo[] getErasureCodingPolicies() throws IOException
      Specified by:
      getErasureCodingPolicies in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getErasureCodingCodecs

      public Map<String,String> getErasureCodingCodecs() throws IOException
      Specified by:
      getErasureCodingCodecs in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getErasureCodingPolicy

      public org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy getErasureCodingPolicy(String src) throws IOException
      Specified by:
      getErasureCodingPolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • unsetErasureCodingPolicy

      public void unsetErasureCodingPolicy(String src) throws IOException
      Specified by:
      unsetErasureCodingPolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • getECTopologyResultForPolicies

      public org.apache.hadoop.hdfs.protocol.ECTopologyVerifierResult getECTopologyResultForPolicies(String... policyNames) throws IOException
      Specified by:
      getECTopologyResultForPolicies in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • addErasureCodingPolicies

      public org.apache.hadoop.hdfs.protocol.AddErasureCodingPolicyResponse[] addErasureCodingPolicies(org.apache.hadoop.hdfs.protocol.ErasureCodingPolicy[] policies) throws IOException
      Specified by:
      addErasureCodingPolicies in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • removeErasureCodingPolicy

      public void removeErasureCodingPolicy(String ecPolicyName) throws IOException
      Specified by:
      removeErasureCodingPolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • enableErasureCodingPolicy

      public void enableErasureCodingPolicy(String ecPolicyName) throws IOException
      Specified by:
      enableErasureCodingPolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
    • disableErasureCodingPolicy

      public void disableErasureCodingPolicy(String ecPolicyName) throws IOException
      Specified by:
      disableErasureCodingPolicy in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException
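      The erasure coding RPCs above correspond to the `hdfs ec` CLI. A sketch (the directory path is a placeholder; RS-6-3-1024k is one of the built-in policies):

      ```shell
      # Inspect available policies and codecs
      # (getErasureCodingPolicies, getErasureCodingCodecs).
      hdfs ec -listPolicies
      hdfs ec -listCodecs

      # Enable a policy, apply it to a directory, verify, and remove it
      # (enableErasureCodingPolicy, setErasureCodingPolicy,
      #  getErasureCodingPolicy, unsetErasureCodingPolicy).
      hdfs ec -enablePolicy -policy RS-6-3-1024k
      hdfs ec -setPolicy -path /ecdata -policy RS-6-3-1024k
      hdfs ec -getPolicy -path /ecdata
      hdfs ec -unsetPolicy -path /ecdata
      ```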
    • startReconfiguration

      public void startReconfiguration() throws IOException
      Specified by:
      startReconfiguration in interface org.apache.hadoop.hdfs.protocol.ReconfigurationProtocol
      Throws:
      IOException
    • getReconfigurationStatus

      public org.apache.hadoop.conf.ReconfigurationTaskStatus getReconfigurationStatus() throws IOException
      Specified by:
      getReconfigurationStatus in interface org.apache.hadoop.hdfs.protocol.ReconfigurationProtocol
      Throws:
      IOException
    • listReconfigurableProperties

      public List<String> listReconfigurableProperties() throws IOException
      Specified by:
      listReconfigurableProperties in interface org.apache.hadoop.hdfs.protocol.ReconfigurationProtocol
      Throws:
      IOException
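      The reconfiguration RPCs above are driven by `hdfs dfsadmin -reconfig`. A sketch (the host:port is a placeholder for the NameNode's RPC address):

      ```shell
      # Begin applying reconfigurable properties from the on-disk hdfs-site.xml
      # (startReconfiguration), then poll for completion (getReconfigurationStatus).
      hdfs dfsadmin -reconfig namenode nn1.example.com:8020 start
      hdfs dfsadmin -reconfig namenode nn1.example.com:8020 status

      # List which properties can be changed without a restart
      # (listReconfigurableProperties).
      hdfs dfsadmin -reconfig namenode nn1.example.com:8020 properties
      ```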
    • getNextSPSPath

      public Long getNextSPSPath() throws IOException
      Specified by:
      getNextSPSPath in interface NamenodeProtocol
      Returns:
      the next available SPS path, or null if none is pending. This API is used by the external SPS (storage policy satisfier).
      Throws:
      IOException
    • getEnclosingRoot

      public org.apache.hadoop.fs.Path getEnclosingRoot(String src) throws IOException
      Specified by:
      getEnclosingRoot in interface org.apache.hadoop.hdfs.protocol.ClientProtocol
      Throws:
      IOException