Class DataNode
java.lang.Object
org.apache.hadoop.conf.Configured
org.apache.hadoop.conf.ReconfigurableBase
org.apache.hadoop.hdfs.server.datanode.DataNode
- All Implemented Interfaces:
org.apache.hadoop.conf.Configurable, org.apache.hadoop.conf.Reconfigurable, org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol, org.apache.hadoop.hdfs.protocol.ReconfigurationProtocol, DataNodeMXBean, InterDatanodeProtocol
@Private
public class DataNode
extends org.apache.hadoop.conf.ReconfigurableBase
implements InterDatanodeProtocol, org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol, DataNodeMXBean, org.apache.hadoop.hdfs.protocol.ReconfigurationProtocol
DataNode is a class (and program) that stores a set of
blocks for a DFS deployment. A single deployment can
have one or many DataNodes. Each DataNode communicates
regularly with a single NameNode. It also communicates
with client code and other DataNodes from time to time.
DataNodes store a series of named blocks. The DataNode
allows client code to read these blocks, or to write new
block data. The DataNode may also, in response to instructions
from its NameNode, delete blocks or copy blocks to/from other
DataNodes.
The DataNode maintains just one critical table:
block -> stream of bytes (of BLOCK_SIZE or less)
This info is stored on a local disk. The DataNode
reports the table's contents to the NameNode upon startup
and every so often afterwards.
DataNodes spend their lives in an endless loop of asking
the NameNode for something to do. A NameNode cannot connect
to a DataNode directly; a NameNode simply returns values from
functions invoked by a DataNode.
DataNodes maintain an open server socket so that client code
or other DataNodes can read/write data. The host/port for
this server is reported to the NameNode, which then sends that
information to clients or other DataNodes that might be interested.
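The pull-based design described above (the NameNode never dials out; it only answers requests) can be sketched with a small self-contained stand-in. This is not Hadoop code: the queue, the command strings, and the heartbeat method below are illustrative assumptions only.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Stand-in sketch of the DataNode's pull loop; not Hadoop code.
public class PullLoopSketch {
    // The "NameNode" side: it holds pending commands and only hands
    // them out when a DataNode asks (e.g. in a heartbeat reply).
    static final Queue<String> pendingCommands = new ArrayDeque<>();

    static String heartbeat() {
        return pendingCommands.poll(); // null means "nothing to do"
    }

    public static void main(String[] args) {
        pendingCommands.add("DELETE block_42");
        pendingCommands.add("COPY block_7 -> other-datanode");
        // The "DataNode" side: an endless loop in real life, bounded here.
        String cmd;
        while ((cmd = heartbeat()) != null) {
            System.out.println("executing: " + cmd);
        }
    }
}
```

The key property the sketch shows is directionality: all calls originate on the DataNode side, so the NameNode needs no route back to the DataNode.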
-
Nested Class Summary
(two static nested classes)
Field Summary
Fields:
static final String DN_CLIENTTRACE_FORMAT
org.apache.hadoop.ipc.RPC.Server ipcServer
static final org.slf4j.Logger LOG
static final int MAX_VOLUME_FAILURE_TOLERATED_LIMIT
static final String MAX_VOLUME_FAILURES_TOLERATED_MSG
static final String METRICS_LOG_NAME
Fields inherited from interface org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol:
versionID
Fields inherited from interface org.apache.hadoop.hdfs.server.protocol.InterDatanodeProtocol:
versionID
Fields inherited from interface org.apache.hadoop.hdfs.protocol.ReconfigurationProtocol:
VERSIONID
-
Method Summary
All Methods (signatures and descriptions are given under Method Details below):
cancelDiskBalancePlan, checkDiskError, checkDiskErrorAsync, clearAllBlockSecretKeys, createDataNode, createInterDataNodeProtocolProxy, createSocketAddr (Deprecated), decrementXmitsInProgress, deleteBlockPool, evictWriters, generateUuid, getActiveTransferThreadCount, getBalancerBandwidth, getBlockAccessToken, getBlockLocalPathInfo, getBlockPoolManager, getBlockPoolTokenSecretManager, getBlockRecoveryWorker, getBlockScanner, getBpOsCount, getBPServiceActorInfo, getBPServiceActorInfoMap, getClusterId, getDataEncryptionKeyFactoryForBlock, getDatanodeHostname, getDatanodeId, getDatanodeInfo, getDatanodeNetworkCounts, getDatanodeUuid, getDataPort, getDataSetLockManager, getDiskBalancer, getDiskBalancerSetting, getDiskBalancerStatus, getDiskMetrics, getDisplayName, getDnConf, getDNRegistrationForBP, getDNStartedTimeInMillis, getECN, getEcReconstuctReadThrottler, getEcReconstuctWriteThrottler, getErasureCodingWorker, getFileIoProvider, getFSDataset, getHttpPort, getInfoAddr, getInfoPort, getInfoSecurePort, getIpcPort, getLastDiskErrorCheck, getMaxNumberOfBlocksToLog, getMetrics, getNamenodeAddresses, getNewConf, getOOBTimeout, getPeerMetrics, getReconfigurableProperties, getReconfigurationStatus, getReplicaVisibleLength, getRevision, getRpcPort, getSaslClient, getSaslServer, getSendPacketDownstreamAvgInfo, getShortCircuitRegistry, getSlowDisks, getSLOWByBlockPoolId, getSoftwareVersion, getStorageLocations, getTracer, getVersion, getVolumeInfo, getVolumeReport, getXceiverCount, getXferAddress, getXferPort, getXferServer, getXmitsInProgress, handleVolumeFailures, incrementXmitsInProcess, incrementXmitsInProgress, initReplicaRecovery, instantiateDataNode, isBPServiceAlive, isConnectedToNN, isDatanodeFullyStarted, isDatanodeUp, isSecurityEnabled, listReconfigurableProperties, main, newSocket, notifyNamenodeDeletedBlock, notifyNamenodeReceivedBlock, notifyNamenodeReceivingBlock, queryDiskBalancerPlan, reconfigurePropertyImpl, refreshNamenodes, reportBadBlocks, reportCorruptedBlocks, reportRemoteBadBlock, runDatanodeDaemon, scheduleAllBlockReport, secureMain, setHeartbeatsDisabledForTests, shutdown, shutdownDatanode, startMetricsLogger, startReconfiguration, stopMetricsLogger, submitDiskBalancerPlan, toString, triggerBlockReport, updateReplicaUnderRecovery
Methods inherited from class org.apache.hadoop.conf.ReconfigurableBase:
getChangedProperties, getReconfigurationTaskStatus, isPropertyReconfigurable, reconfigureProperty, setReconfigurationUtil, shutdownReconfigurationTask, startReconfigurationTask
Methods inherited from class org.apache.hadoop.conf.Configured:
getConf, setConf
Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait
Methods inherited from interface org.apache.hadoop.conf.Configurable:
getConf, setConf
-
Field Details
-
LOG
public static final org.slf4j.Logger LOG
-
DN_CLIENTTRACE_FORMAT
public static final String DN_CLIENTTRACE_FORMAT
-
MAX_VOLUME_FAILURE_TOLERATED_LIMIT
public static final int MAX_VOLUME_FAILURE_TOLERATED_LIMIT
-
MAX_VOLUME_FAILURES_TOLERATED_MSG
public static final String MAX_VOLUME_FAILURES_TOLERATED_MSG
-
METRICS_LOG_NAME
public static final String METRICS_LOG_NAME
-
ipcServer
public org.apache.hadoop.ipc.RPC.Server ipcServer
-
-
Method Details
-
createSocketAddr
public static InetSocketAddress createSocketAddr(String target)
Deprecated. Use NetUtils.createSocketAddr(String) instead.
-
getNewConf
protected org.apache.hadoop.conf.Configuration getNewConf()
- Specified by:
getNewConf in class org.apache.hadoop.conf.ReconfigurableBase
-
reconfigurePropertyImpl
public String reconfigurePropertyImpl(String property, String newVal) throws org.apache.hadoop.conf.ReconfigurationException
- Specified by:
reconfigurePropertyImpl in class org.apache.hadoop.conf.ReconfigurableBase
- Throws:
org.apache.hadoop.conf.ReconfigurationException
-
getReconfigurableProperties
Get a list of the keys of the re-configurable properties in configuration.
- Specified by:
getReconfigurableProperties in interface org.apache.hadoop.conf.Reconfigurable
- Specified by:
getReconfigurableProperties in class org.apache.hadoop.conf.ReconfigurableBase
-
getECN
public org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.ECN getECN()
The ECN bit for the DataNode. The DataNode should return:
- ECN.DISABLED when ECN is disabled.
- ECN.SUPPORTED when ECN is enabled but the DN still has capacity.
- ECN.CONGESTED when ECN is enabled and the DN is congested.
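The three-way contract can be sketched as a tiny decision function. The boolean flags below are assumptions standing in for the DataNode's internal state; only the mapping to the three constants comes from the description above, and the real enum lives in org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.

```java
// Hedged stand-in for the documented three-state ECN contract.
public class EcnSketch {
    enum ECN { DISABLED, SUPPORTED, CONGESTED }

    // ecnEnabled/congested are hypothetical stand-ins for internal state.
    static ECN getECN(boolean ecnEnabled, boolean congested) {
        if (!ecnEnabled) return ECN.DISABLED;
        return congested ? ECN.CONGESTED : ECN.SUPPORTED;
    }

    public static void main(String[] args) {
        System.out.println(getECN(true, false)); // SUPPORTED
    }
}
```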
-
getSLOWByBlockPoolId
public org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.SLOW getSLOWByBlockPoolId(String bpId)
The SLOW bit for the DataNode of the specific block pool. The DataNode should return:
- SLOW.DISABLED when SLOW is disabled.
- SLOW.NORMAL when SLOW is enabled and the DN is not a slow node.
- SLOW.SLOW when SLOW is enabled and the DN is a slow node.
-
getFileIoProvider
-
notifyNamenodeReceivedBlock
-
notifyNamenodeReceivingBlock
protected void notifyNamenodeReceivingBlock(org.apache.hadoop.hdfs.protocol.ExtendedBlock block, String storageUuid) -
notifyNamenodeDeletedBlock
public void notifyNamenodeDeletedBlock(org.apache.hadoop.hdfs.protocol.ExtendedBlock block, String storageUuid) Notify the corresponding namenode to delete the block. -
reportBadBlocks
public void reportBadBlocks(org.apache.hadoop.hdfs.protocol.ExtendedBlock block) throws IOException
Report a bad block which is hosted on the local DN.
- Throws:
IOException
-
reportBadBlocks
public void reportBadBlocks(org.apache.hadoop.hdfs.protocol.ExtendedBlock block, FsVolumeSpi volume) throws IOException
Report a bad block which is hosted on the local DN.
- Parameters:
block - the bad block hosted on the local DN
volume - the volume the block is stored in; must not be null
- Throws:
IOException
-
reportRemoteBadBlock
public void reportRemoteBadBlock(org.apache.hadoop.hdfs.protocol.DatanodeInfo srcDataNode, org.apache.hadoop.hdfs.protocol.ExtendedBlock block) throws IOException
Report a bad block on another DN (e.g. if we received a corrupt replica from a remote host).
- Parameters:
srcDataNode - the DN hosting the bad block
block - the block itself
- Throws:
IOException
-
reportCorruptedBlocks
public void reportCorruptedBlocks(org.apache.hadoop.hdfs.DFSUtilClient.CorruptedBlocks corruptedBlocks) throws IOException - Throws:
IOException
-
setHeartbeatsDisabledForTests
@VisibleForTesting public void setHeartbeatsDisabledForTests(boolean heartbeatsDisabledForTests) -
generateUuid
-
getSaslClient
public org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient getSaslClient() -
getBpOsCount
public int getBpOsCount() -
getInfoAddr
public static InetSocketAddress getInfoAddr(org.apache.hadoop.conf.Configuration conf)
Determine the HTTP server's effective address.
-
getXferServer
@VisibleForTesting public org.apache.hadoop.hdfs.server.datanode.DataXceiverServer getXferServer() -
getXferPort
@VisibleForTesting public int getXferPort() -
getSaslServer
-
getDisplayName
- Returns:
- name useful for logging or display
-
getXferAddress
NB: The datanode can perform data transfer on the streaming address; however, clients are given the IPC IP address for data transfer, and that may be a different address.
- Returns:
- socket address for data transfer
-
getIpcPort
public int getIpcPort()
- Returns:
- the datanode's IPC port
-
getDNRegistrationForBP
@VisibleForTesting public DatanodeRegistration getDNRegistrationForBP(String bpid) throws IOException
Get BP registration by block pool id.
- Returns:
- BP registration object
- Throws:
IOException- on error
-
newSocket
Creates either an NIO or a regular socket, depending on socketWriteTimeout.
- Throws:
IOException
-
createInterDataNodeProtocolProxy
public static InterDatanodeProtocol createInterDataNodeProtocolProxy(org.apache.hadoop.hdfs.protocol.DatanodeID datanodeid, org.apache.hadoop.conf.Configuration conf, int socketTimeout, boolean connectToDnViaHostname) throws IOException - Throws:
IOException
-
getMetrics
-
getDiskMetrics
-
getPeerMetrics
-
getMaxNumberOfBlocksToLog
public long getMaxNumberOfBlocksToLog() -
getBlockLocalPathInfo
public org.apache.hadoop.hdfs.protocol.BlockLocalPathInfo getBlockLocalPathInfo(org.apache.hadoop.hdfs.protocol.ExtendedBlock block, org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier> token) throws IOException
- Specified by:
getBlockLocalPathInfo in interface org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
- Throws:
IOException
-
shutdown
public void shutdown()
Shut down this instance of the datanode. Returns only after shutdown is complete. This method can only be called by the offerService thread. Otherwise, deadlock might occur.
-
checkDiskErrorAsync
Check if there is a disk failure asynchronously and if so, handle the error. -
getXceiverCount
public int getXceiverCount()
Number of concurrent xceivers per node.
- Specified by:
getXceiverCount in interface DataNodeMXBean
-
getActiveTransferThreadCount
public int getActiveTransferThreadCount()
Description copied from interface: DataNodeMXBean
Returns the number of Datanode threads actively transferring blocks.
- Specified by:
getActiveTransferThreadCount in interface DataNodeMXBean
-
getDatanodeNetworkCounts
Description copied from interface: DataNodeMXBean
Gets the network error counts on a per-Datanode basis.
- Specified by:
getDatanodeNetworkCounts in interface DataNodeMXBean
-
getXmitsInProgress
public int getXmitsInProgress()
Description copied from interface: DataNodeMXBean
Returns an estimate of the number of data replication/reconstruction tasks running currently.
- Specified by:
getXmitsInProgress in interface DataNodeMXBean
-
incrementXmitsInProgress
public void incrementXmitsInProgress()
Increments the xmitsInProgress count. The xmitsInProgress count represents the number of data replication/reconstruction tasks running currently.
-
incrementXmitsInProcess
public void incrementXmitsInProcess(int delta)
Increments the xmitsInProgress count by the given value.
- Parameters:
delta - the amount by which to increase xmitsInProgress.
-
decrementXmitsInProgress
public void decrementXmitsInProgress()
Decrements the xmitsInProgress count.
-
decrementXmitsInProgress
public void decrementXmitsInProgress(int delta)
Decrements the xmitsInProgress count by the given value.
-
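Callers typically pair the increment and decrement around a transfer task so the count stays accurate even when the task fails. A self-contained sketch of that discipline follows; the AtomicInteger and the runTransferTask helper are hypothetical stand-ins for the DataNode's internal counter and its callers, not Hadoop APIs.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Stand-in for the DataNode's xmitsInProgress counter; not Hadoop code.
public class XmitsSketch {
    static final AtomicInteger xmitsInProgress = new AtomicInteger();

    // Hypothetical wrapper: increment before the task, decrement after,
    // using try/finally so the count is restored even if the task throws.
    static void runTransferTask(Runnable task) {
        xmitsInProgress.incrementAndGet();     // cf. incrementXmitsInProgress()
        try {
            task.run();
        } finally {
            xmitsInProgress.decrementAndGet(); // cf. decrementXmitsInProgress()
        }
    }

    public static void main(String[] args) {
        runTransferTask(() -> System.out.println("replicating block..."));
        System.out.println("xmits in progress: " + xmitsInProgress.get()); // 0
    }
}
```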
getBlockAccessToken
public org.apache.hadoop.security.token.Token<org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier> getBlockAccessToken(org.apache.hadoop.hdfs.protocol.ExtendedBlock b, EnumSet<org.apache.hadoop.hdfs.security.token.block.BlockTokenIdentifier.AccessMode> mode, org.apache.hadoop.fs.StorageType[] storageTypes, String[] storageIds) throws IOException
Use BlockTokenSecretManager to generate a block token for the current user.
- Throws:
IOException
-
getDataEncryptionKeyFactoryForBlock
public org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataEncryptionKeyFactory getDataEncryptionKeyFactoryForBlock(org.apache.hadoop.hdfs.protocol.ExtendedBlock block)
Returns a new DataEncryptionKeyFactory that generates a key from the BlockPoolTokenSecretManager, using the block pool ID of the given block.
- Parameters:
block - the block for which the factory needs to create a key
- Returns:
- DataEncryptionKeyFactory for block's block pool ID
-
runDatanodeDaemon
Start a single datanode daemon and wait for it to finish. If this thread is specifically interrupted, it will stop waiting.
- Throws:
IOException
-
isDatanodeUp
public boolean isDatanodeUp()
A datanode is considered to be up if one of the BP services is up.
-
instantiateDataNode
public static DataNode instantiateDataNode(String[] args, org.apache.hadoop.conf.Configuration conf) throws IOException
Instantiate a single datanode object. This must be run by invoking runDatanodeDaemon() subsequently.
- Throws:
IOException
-
instantiateDataNode
public static DataNode instantiateDataNode(String[] args, org.apache.hadoop.conf.Configuration conf, SecureDataNodeStarter.SecureResources resources) throws IOException
Instantiate a single datanode object, along with its secure resources. This must be run by invoking runDatanodeDaemon() subsequently.
- Throws:
IOException
-
getStorageLocations
-
createDataNode
@VisibleForTesting public static DataNode createDataNode(String[] args, org.apache.hadoop.conf.Configuration conf) throws IOException
Instantiate and start a single datanode daemon and wait for it to finish. If this thread is specifically interrupted, it will stop waiting.
- Throws:
IOException
-
createDataNode
@VisibleForTesting @Private public static DataNode createDataNode(String[] args, org.apache.hadoop.conf.Configuration conf, SecureDataNodeStarter.SecureResources resources) throws IOException
Instantiate and start a single datanode daemon and wait for it to finish. If this thread is specifically interrupted, it will stop waiting.
- Throws:
IOException
-
toString
-
scheduleAllBlockReport
public void scheduleAllBlockReport(long delay)
This method arranges for the data node to send the block report at the next heartbeat.
-
getFSDataset
Examples are adding and deleting blocks directly. The most common usage will be when the data node's storage is simulated.
- Returns:
- the fsdataset that stores the blocks
-
getBlockScanner
-
getBlockPoolTokenSecretManager
-
secureMain
-
main
-
initReplicaRecovery
public ReplicaRecoveryInfo initReplicaRecovery(BlockRecoveryCommand.RecoveringBlock rBlock) throws IOException
Description copied from interface: InterDatanodeProtocol
Initialize a replica recovery.
- Specified by:
initReplicaRecovery in interface InterDatanodeProtocol
- Returns:
- actual state of the replica on this data-node or null if data-node does not have the replica.
- Throws:
IOException
-
updateReplicaUnderRecovery
public String updateReplicaUnderRecovery(org.apache.hadoop.hdfs.protocol.ExtendedBlock oldBlock, long recoveryId, long newBlockId, long newLength) throws IOException
Update the replica with the new generation stamp and length.
- Specified by:
updateReplicaUnderRecovery in interface InterDatanodeProtocol
- Throws:
IOException
-
getReplicaVisibleLength
public long getReplicaVisibleLength(org.apache.hadoop.hdfs.protocol.ExtendedBlock block) throws IOException
- Specified by:
getReplicaVisibleLength in interface org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
- Throws:
IOException
-
getSoftwareVersion
Description copied from interface: DataNodeMXBean
Get the version of software running on the DataNode.
- Specified by:
getSoftwareVersion in interface DataNodeMXBean
- Returns:
- a string representing the version
-
getVersion
Description copied from interface: DataNodeMXBean
Gets the version of Hadoop.
- Specified by:
getVersion in interface DataNodeMXBean
- Returns:
- the version of Hadoop
-
getRpcPort
Description copied from interface: DataNodeMXBean
Gets the rpc port.
- Specified by:
getRpcPort in interface DataNodeMXBean
- Returns:
- the rpc port
-
getDataPort
Description copied from interface: DataNodeMXBean
Gets the data port.
- Specified by:
getDataPort in interface DataNodeMXBean
- Returns:
- the data port
-
getHttpPort
Description copied from interface: DataNodeMXBean
Gets the http port.
- Specified by:
getHttpPort in interface DataNodeMXBean
- Returns:
- the http port
-
getDNStartedTimeInMillis
public long getDNStartedTimeInMillis()
Description copied from interface: DataNodeMXBean
Get the start time of the DataNode.
- Specified by:
getDNStartedTimeInMillis in interface DataNodeMXBean
- Returns:
- Start time of the DataNode.
-
getRevision
-
getInfoPort
public int getInfoPort()
- Returns:
- the datanode's http port
-
getInfoSecurePort
public int getInfoSecurePort()
- Returns:
- the datanode's https port
-
getNamenodeAddresses
Returned information is a JSON representation of a map with namenode host name as the key and block pool ID as the value. Note that, if there are multiple NNs in an HA nameservice, a given block pool may be represented twice.
- Specified by:
getNamenodeAddresses in interface DataNodeMXBean
- Returns:
- the namenode IP addresses that the datanode is talking to
-
getDatanodeHostname
Return the hostname of the datanode.
- Specified by:
getDatanodeHostname in interface DataNodeMXBean
- Returns:
- the hostname of the datanode.
-
getBPServiceActorInfo
Returned information is a JSON representation of an array; each element of the array is a map containing information about a block pool service actor.
- Specified by:
getBPServiceActorInfo in interface DataNodeMXBean
- Returns:
- block pool service actors info
-
getBPServiceActorInfoMap
-
getVolumeInfo
Returned information is a JSON representation of a map with volume name as the key; the value is a map of volume attribute keys to their values.
- Specified by:
getVolumeInfo in interface DataNodeMXBean
- Returns:
- the volume info
-
getClusterId
Description copied from interface: DataNodeMXBean
Gets the cluster id.
- Specified by:
getClusterId in interface DataNodeMXBean
- Returns:
- the cluster id
-
getDiskBalancerStatus
Description copied from interface: DataNodeMXBean
Gets the diskBalancer status. Please see the implementation for the format of the returned information.
- Specified by:
getDiskBalancerStatus in interface DataNodeMXBean
- Returns:
- DiskBalancer Status
-
isSecurityEnabled
public boolean isSecurityEnabled()
Description copied from interface: DataNodeMXBean
Gets whether security is enabled.
- Specified by:
isSecurityEnabled in interface DataNodeMXBean
- Returns:
- true, if security is enabled.
-
refreshNamenodes
- Throws:
IOException
-
refreshNamenodes
- Specified by:
refreshNamenodes in interface org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
- Throws:
IOException
-
deleteBlockPool
- Specified by:
deleteBlockPool in interface org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
- Throws:
IOException
-
shutdownDatanode
- Specified by:
shutdownDatanode in interface org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
- Throws:
IOException
-
evictWriters
- Specified by:
evictWriters in interface org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
- Throws:
IOException
-
getDatanodeInfo
public org.apache.hadoop.hdfs.protocol.DatanodeLocalInfo getDatanodeInfo()
- Specified by:
getDatanodeInfo in interface org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
-
startReconfiguration
- Specified by:
startReconfiguration in interface org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
- Specified by:
startReconfiguration in interface org.apache.hadoop.hdfs.protocol.ReconfigurationProtocol
- Throws:
IOException
-
getReconfigurationStatus
public org.apache.hadoop.conf.ReconfigurationTaskStatus getReconfigurationStatus() throws IOException
- Specified by:
getReconfigurationStatus in interface org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
- Specified by:
getReconfigurationStatus in interface org.apache.hadoop.hdfs.protocol.ReconfigurationProtocol
- Throws:
IOException
-
listReconfigurableProperties
- Specified by:
listReconfigurableProperties in interface org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
- Specified by:
listReconfigurableProperties in interface org.apache.hadoop.hdfs.protocol.ReconfigurationProtocol
- Throws:
IOException
-
triggerBlockReport
public void triggerBlockReport(org.apache.hadoop.hdfs.client.BlockReportOptions options) throws IOException
- Specified by:
triggerBlockReport in interface org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
- Throws:
IOException
-
isConnectedToNN
- Parameters:
addr - RPC address of the namenode
- Returns:
- true if the datanode is connected to a NameNode at the given address
-
isBPServiceAlive
- Parameters:
bpid - block pool Id
- Returns:
- true - if BPOfferService thread is alive
-
isDatanodeFullyStarted
public boolean isDatanodeFullyStarted()
A datanode is considered to be fully started if all the BP threads are alive and all the block pools are initialized.
- Returns:
- true - if the data node is fully started
-
isDatanodeFullyStarted
public boolean isDatanodeFullyStarted(boolean checkConnectionToActiveNamenode)
A datanode is considered to be fully started if all the BP threads are alive and all the block pools are initialized. If checkConnectionToActiveNamenode is true, the datanode is considered fully started only if it is also heartbeating to the active namenode, in addition to the above conditions.
- Parameters:
checkConnectionToActiveNamenode - if true, additionally checks whether the datanode is heartbeating to the active namenode.
- Returns:
- true if the datanode is fully started and also conditionally connected to active namenode, false otherwise.
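In test code a predicate like this is commonly polled until it turns true. A hedged, self-contained sketch of such a wait loop follows; waitFor and its supplier are hypothetical helpers (in real test code the supplier would call isDatanodeFullyStarted()), not part of the DataNode API.

```java
import java.util.function.BooleanSupplier;

// Generic poll-until-true helper; not part of the DataNode API.
public class WaitSketch {
    static boolean waitFor(BooleanSupplier condition, long timeoutMs, long pollMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() >= deadline) {
                return false; // timed out before the condition held
            }
            try {
                Thread.sleep(pollMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // In real test code: waitFor(() -> dn.isDatanodeFullyStarted(), 60000, 100)
        boolean started = waitFor(() -> true, 1000, 10);
        System.out.println("fully started: " + started); // fully started: true
    }
}
```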
-
getDatanodeId
@VisibleForTesting public org.apache.hadoop.hdfs.protocol.DatanodeID getDatanodeId() -
clearAllBlockSecretKeys
@VisibleForTesting public void clearAllBlockSecretKeys() -
getBalancerBandwidth
public long getBalancerBandwidth()
- Specified by:
getBalancerBandwidth in interface org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
-
getDnConf
-
getDatanodeUuid
-
getShortCircuitRegistry
-
getEcReconstuctReadThrottler
-
getEcReconstuctWriteThrottler
-
checkDiskError
Check the disk error synchronously.
- Throws:
IOException
-
handleVolumeFailures
-
getLastDiskErrorCheck
@VisibleForTesting public long getLastDiskErrorCheck() -
getBlockRecoveryWorker
-
getErasureCodingWorker
-
getOOBTimeout
public long getOOBTimeout(org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.Status status) throws IOException
Get the timeout to be used for transmitting the OOB type.
- Returns:
- the timeout in milliseconds
- Throws:
IOException
-
startMetricsLogger
protected void startMetricsLogger()
Start a timer to periodically write DataNode metrics to the log file. This behavior can be disabled by configuration.
-
stopMetricsLogger
protected void stopMetricsLogger() -
getTracer
public org.apache.hadoop.tracing.Tracer getTracer() -
submitDiskBalancerPlan
public void submitDiskBalancerPlan(String planID, long planVersion, String planFile, String planData, boolean skipDateCheck) throws IOException
Allows submission of a disk balancer job.
- Specified by:
submitDiskBalancerPlan in interface org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
- Parameters:
planID - hash value of the plan.
planVersion - plan version; reserved for future use (only version 1 exists now).
planFile - plan file name.
planData - actual plan data in JSON format.
- Throws:
IOException
-
cancelDiskBalancePlan
Cancels a running plan.
- Specified by:
cancelDiskBalancePlan in interface org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
- Parameters:
planID - hash string that identifies a plan.
- Throws:
IOException
-
queryDiskBalancerPlan
public org.apache.hadoop.hdfs.server.datanode.DiskBalancerWorkStatus queryDiskBalancerPlan() throws IOException
Returns the status of the current or last executed work plan.
- Specified by:
queryDiskBalancerPlan in interface org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
- Returns:
- DiskBalancerWorkStatus.
- Throws:
IOException
-
getDiskBalancerSetting
Gets a runtime configuration value from the diskbalancer instance. For example: disk balancer bandwidth.
- Specified by:
getDiskBalancerSetting in interface org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
- Parameters:
key - string that represents the runtime key value.
- Returns:
- value of the key as a string.
- Throws:
IOException - if there is no such key
-
getSendPacketDownstreamAvgInfo
Description copied from interface: DataNodeMXBean
Gets the average info (e.g. time) of SendPacketDownstream when the DataNode acts as the penultimate (2nd to last) node in the pipeline.
Example JSON: {"[185.164.159.81:9801]RollingAvgTime":504.867, "[49.236.149.246:9801]RollingAvgTime":504.463, "[84.125.113.65:9801]RollingAvgTime":497.954}
- Specified by:
getSendPacketDownstreamAvgInfo in interface DataNodeMXBean
-
getSlowDisks
Description copied from interface: DataNodeMXBean
Gets the slow disks in the Datanode.
- Specified by:
getSlowDisks in interface DataNodeMXBean
- Returns:
- list of slow disks
-
getVolumeReport
public List<org.apache.hadoop.hdfs.protocol.DatanodeVolumeInfo> getVolumeReport() throws IOException
- Specified by:
getVolumeReport in interface org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol
- Throws:
IOException
-
getDiskBalancer
- Throws:
IOException
-
getDataSetLockManager
-
getBlockPoolManager
@VisibleForTesting public org.apache.hadoop.hdfs.server.datanode.BlockPoolManager getBlockPoolManager()
-