Class DatanodeDescriptor

java.lang.Object
org.apache.hadoop.hdfs.protocol.DatanodeID
org.apache.hadoop.hdfs.protocol.DatanodeInfo
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor
All Implemented Interfaces:
Comparable<org.apache.hadoop.hdfs.protocol.DatanodeID>, org.apache.hadoop.net.Node
Direct Known Subclasses:
ProvidedStorageMap.ProvidedDescriptor

@Private @Evolving public class DatanodeDescriptor extends org.apache.hadoop.hdfs.protocol.DatanodeInfo
This class extends the DatanodeInfo class with ephemeral information (e.g. health, capacity, and which blocks are associated with the Datanode) that is private to the Namenode; that is, this class is not exposed to clients.
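As a rough illustration of this split (a self-contained sketch, not Hadoop code; every class and field name here is invented for the example), the NameNode-private descriptor layers mutable, ephemeral state on top of the identity data that clients see:

```java
// Hypothetical miniature of the DatanodeInfo/DatanodeDescriptor layering.
// The real Hadoop types carry far more state; this only shows the idea of
// a NameNode-private subclass adding ephemeral fields.
class NodeInfo {                        // analogous to DatanodeInfo: client-visible
    final String datanodeUuid;
    NodeInfo(String uuid) { this.datanodeUuid = uuid; }
}

class NodeDescriptor extends NodeInfo { // analogous to DatanodeDescriptor: NameNode-private
    private boolean alive;              // ephemeral health flag
    private long capacityBytes;         // ephemeral capacity snapshot

    NodeDescriptor(String uuid) { super(uuid); }

    void setAlive(boolean alive) { this.alive = alive; }
    boolean isAlive() { return alive; }

    void updateCapacity(long bytes) { this.capacityBytes = bytes; }
    long getCapacity() { return capacityBytes; }
}
```

Because the ephemeral state lives only in the subclass, serializing or exposing the base type never leaks NameNode-internal bookkeeping.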
  • Field Details

  • Constructor Details

    • DatanodeDescriptor

      public DatanodeDescriptor(org.apache.hadoop.hdfs.protocol.DatanodeID nodeID)
      DatanodeDescriptor constructor
      Parameters:
      nodeID - id of the data node
    • DatanodeDescriptor

      public DatanodeDescriptor(org.apache.hadoop.hdfs.protocol.DatanodeID nodeID, String networkLocation)
      DatanodeDescriptor constructor
      Parameters:
      nodeID - id of the data node
      networkLocation - location of the data node in network
  • Method Details

    • getPendingCached

      public DatanodeDescriptor.CachedBlocksList getPendingCached()
    • getCached

      public DatanodeDescriptor.CachedBlocksList getCached()

    • getPendingUncached

      public DatanodeDescriptor.CachedBlocksList getPendingUncached()
    • isAlive

      public boolean isAlive()
    • setAlive

      public void setAlive(boolean isAlive)
    • needKeyUpdate

      public boolean needKeyUpdate()
    • setNeedKeyUpdate

      public void setNeedKeyUpdate(boolean needKeyUpdate)
    • getLeavingServiceStatus

      public DatanodeDescriptor.LeavingServiceStatus getLeavingServiceStatus()
    • isHeartbeatedSinceRegistration

      @VisibleForTesting public boolean isHeartbeatedSinceRegistration()
    • getStorageInfo

      @VisibleForTesting public DatanodeStorageInfo getStorageInfo(String storageID)
    • getStorageInfos

      @VisibleForTesting public DatanodeStorageInfo[] getStorageInfos()
    • getStorageTypes

      public EnumSet<org.apache.hadoop.fs.StorageType> getStorageTypes()
    • getStorageReports

      public org.apache.hadoop.hdfs.server.protocol.StorageReport[] getStorageReports()
    • resetBlocks

      public void resetBlocks()
    • clearBlockQueues

      public void clearBlockQueues()
    • numBlocks

      public int numBlocks()
    • incrementPendingReplicationWithoutTargets

      @VisibleForTesting public void incrementPendingReplicationWithoutTargets()
    • decrementPendingReplicationWithoutTargets

      @VisibleForTesting public void decrementPendingReplicationWithoutTargets()
    • addBlockToBeReplicated

      @VisibleForTesting public void addBlockToBeReplicated(org.apache.hadoop.hdfs.protocol.Block block, DatanodeStorageInfo[] targets)
      Store block replication work.
    • addECBlockToBeReplicated

      @VisibleForTesting public void addECBlockToBeReplicated(org.apache.hadoop.hdfs.protocol.Block block, DatanodeStorageInfo[] targets)
      Store EC block replication work.
    • getNumberOfBlocksToBeErasureCoded

      @VisibleForTesting public int getNumberOfBlocksToBeErasureCoded()
      The number of work items pending erasure-coded reconstruction.
    • getNumberOfECBlocksToBeReplicated

      @VisibleForTesting public int getNumberOfECBlocksToBeReplicated()
      The number of EC block replication work items that are pending.
    • getNumberOfReplicateBlocks

      @VisibleForTesting public int getNumberOfReplicateBlocks()
    • getErasureCodeCommand

      public List<BlockECReconstructionCommand.BlockECReconstructionInfo> getErasureCodeCommand(int maxTransfers)
    • getLeaseRecoveryCommand

      public BlockInfo[] getLeaseRecoveryCommand(int maxTransfers)
    • getInvalidateBlocks

      public org.apache.hadoop.hdfs.protocol.Block[] getInvalidateBlocks(int maxblocks)
      Remove and return up to the specified number of blocks from the invalidation queue.
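The draining behavior described here (remove at most maxblocks entries from the node's invalidation queue per call) can be sketched with a plain queue. The types below are stand-ins for illustration, not Hadoop's:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Stand-in for the per-datanode invalidation queue: each call removes
// (not copies) up to maxblocks entries, mirroring how the NameNode hands
// invalidation work to a datanode one heartbeat at a time.
class InvalidateQueueSketch {
    private final Deque<Long> blockIds = new ArrayDeque<>();  // block IDs as stand-ins

    void add(long blockId) { blockIds.addLast(blockId); }

    long[] getInvalidateBlocks(int maxblocks) {
        List<Long> out = new ArrayList<>();
        while (out.size() < maxblocks && !blockIds.isEmpty()) {
            out.add(blockIds.pollFirst());  // removal, so repeated calls drain the queue
        }
        return out.stream().mapToLong(Long::longValue).toArray();
    }
}
```

Capping the batch size keeps a single heartbeat response bounded even when a node has a large backlog of blocks to delete.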
    • containsInvalidateBlock

      @VisibleForTesting public boolean containsInvalidateBlock(org.apache.hadoop.hdfs.protocol.Block block)
    • chooseStorage4Block

      public DatanodeStorageInfo chooseStorage4Block(org.apache.hadoop.fs.StorageType t, long blockSize, int minBlocksForWrite)
      Find whether the datanode contains good storage of the given type to place a block of size blockSize.

      Currently the datanode only cares about the storage type, so this method returns the first storage of the given type that it sees.

      Parameters:
      t - requested storage type
      blockSize - requested block size
      minBlocksForWrite - the minimum number of blocks required for the write
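The "first storage of the requested type with room" selection can be sketched as below. This is a simplified, self-contained illustration; the headroom rule (remaining space of at least blockSize * minBlocksForWrite) is an assumption for the example, and the real check also accounts for writes already scheduled to the storage:

```java
import java.util.List;

// Invented stand-ins for Hadoop's StorageType and DatanodeStorageInfo.
enum StorageTypeSketch { DISK, SSD, ARCHIVE }

record StorageSketch(StorageTypeSketch type, long remainingBytes) {}

class StorageChooser {
    // Return the first storage of the requested type with enough headroom,
    // or null if this datanode has no suitable storage.
    static StorageSketch chooseStorage4Block(List<StorageSketch> storages,
                                             StorageTypeSketch t,
                                             long blockSize,
                                             int minBlocksForWrite) {
        for (StorageSketch s : storages) {
            if (s.type() == t && s.remainingBytes() >= blockSize * minBlocksForWrite) {
                return s;  // first acceptable storage of the requested type wins
            }
        }
        return null;
    }
}
```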
    • getBlocksScheduled

      public int getBlocksScheduled(org.apache.hadoop.fs.StorageType t)
      Returns:
      Approximate number of blocks currently scheduled to be written to the given storage type of this datanode.
    • getBlocksScheduled

      public int getBlocksScheduled()
      Returns:
      Approximate number of blocks currently scheduled to be written to this datanode.
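The two overloads above expose approximate per-storage-type counters. A simplified, self-contained sketch of that bookkeeping (the method names mirror the documented API, but the increment/decrement plumbing here is an assumption for illustration, roughly: bump the counter when a write is scheduled, drop it when the block is reported):

```java
import java.util.EnumMap;
import java.util.Map;

// Invented stand-in for Hadoop's StorageType.
enum SchedStorageType { DISK, SSD, ARCHIVE }

class ScheduledBlocksSketch {
    private final Map<SchedStorageType, Integer> scheduled =
        new EnumMap<>(SchedStorageType.class);

    void incrementBlocksScheduled(SchedStorageType t) {  // a block write was scheduled
        scheduled.merge(t, 1, Integer::sum);
    }

    void decrementBlocksScheduled(SchedStorageType t) {  // the block was reported written
        scheduled.merge(t, -1, Integer::sum);
    }

    // Approximate count for one storage type.
    int getBlocksScheduled(SchedStorageType t) {
        return scheduled.getOrDefault(t, 0);
    }

    // Approximate total across all storage types.
    int getBlocksScheduled() {
        return scheduled.values().stream().mapToInt(Integer::intValue).sum();
    }
}
```

The counts are approximate by design: scheduling and block reports arrive asynchronously, so the counter trails the datanode's true state between heartbeats.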
    • hashCode

      public int hashCode()
      Overrides:
      hashCode in class org.apache.hadoop.hdfs.protocol.DatanodeInfo
    • equals

      public boolean equals(Object obj)
      Overrides:
      equals in class org.apache.hadoop.hdfs.protocol.DatanodeInfo
    • setDisallowed

      public void setDisallowed(boolean flag)
      Set the flag to indicate if this datanode is disallowed from communicating with the namenode.
    • isDisallowed

      public boolean isDisallowed()
      Is the datanode disallowed from communicating with the namenode?
    • getVolumeFailures

      public int getVolumeFailures()
      Returns:
      number of failed volumes in the datanode.
    • getVolumeFailureSummary

      public VolumeFailureSummary getVolumeFailureSummary()
      Returns info about volume failures.
      Returns:
      info about volume failures, possibly null
    • getNumVolumesAvailable

      public int getNumVolumesAvailable()
      Return the number of volumes that can be written to.
      Returns:
      the number of volumes that can be written to.
    • updateRegInfo

      public void updateRegInfo(org.apache.hadoop.hdfs.protocol.DatanodeID nodeReg)
      Overrides:
      updateRegInfo in class org.apache.hadoop.hdfs.protocol.DatanodeID
      Parameters:
      nodeReg - DatanodeID to update registration for.
    • getBalancerBandwidth

      public long getBalancerBandwidth()
      Returns:
      balancer bandwidth in bytes per second for this datanode
    • setBalancerBandwidth

      public void setBalancerBandwidth(long bandwidth)
      Parameters:
      bandwidth - balancer bandwidth in bytes per second for this datanode
    • dumpDatanode

      public String dumpDatanode()
      Overrides:
      dumpDatanode in class org.apache.hadoop.hdfs.protocol.DatanodeInfo
    • getLastCachingDirectiveSentTimeMs

      public long getLastCachingDirectiveSentTimeMs()
      Returns:
      The time at which we last sent caching directives to this DataNode, in monotonic milliseconds.
    • setLastCachingDirectiveSentTimeMs

      public void setLastCachingDirectiveSentTimeMs(long time)
      Parameters:
      time - The time at which we last sent caching directives to this DataNode, in monotonic milliseconds.
    • checkBlockReportReceived

      public boolean checkBlockReportReceived()
      Returns:
      whether at least the first block report has been received
    • setForceRegistration

      public void setForceRegistration(boolean force)
    • isRegistered

      public boolean isRegistered()
    • hasStorageType

      public boolean hasStorageType(org.apache.hadoop.fs.StorageType type)