Uses of Class
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor
Packages that use DatanodeDescriptor
- org.apache.hadoop.hdfs.server.blockmanagement
- org.apache.hadoop.hdfs.server.namenode
Uses of DatanodeDescriptor in org.apache.hadoop.hdfs.server.blockmanagement
Subclasses of DatanodeDescriptor in org.apache.hadoop.hdfs.server.blockmanagement
- static class: An abstract DatanodeDescriptor to track datanodes with provided storages.

Fields in org.apache.hadoop.hdfs.server.blockmanagement declared as DatanodeDescriptor
- static final DatanodeDescriptor[] DatanodeDescriptor.EMPTY_ARRAY

Methods in org.apache.hadoop.hdfs.server.blockmanagement that return DatanodeDescriptor
- protected DatanodeDescriptor AvailableSpaceBlockPlacementPolicy.chooseDataNode(String scope, Collection<org.apache.hadoop.net.Node> excludedNode)
- protected DatanodeDescriptor AvailableSpaceBlockPlacementPolicy.chooseDataNode(String scope, Collection<org.apache.hadoop.net.Node> excludedNode, org.apache.hadoop.fs.StorageType type)
- protected DatanodeDescriptor AvailableSpaceRackFaultTolerantBlockPlacementPolicy.chooseDataNode(String scope, Collection<org.apache.hadoop.net.Node> excludedNode)
- protected DatanodeDescriptor AvailableSpaceRackFaultTolerantBlockPlacementPolicy.chooseDataNode(String scope, Collection<org.apache.hadoop.net.Node> excludedNode, org.apache.hadoop.fs.StorageType type)
- protected DatanodeDescriptor BlockPlacementPolicyDefault.chooseDataNode(String scope, Collection<org.apache.hadoop.net.Node> excludedNodes): Choose a datanode from the given scope.
- protected DatanodeDescriptor BlockPlacementPolicyDefault.chooseDataNode(String scope, Collection<org.apache.hadoop.net.Node> excludedNodes, org.apache.hadoop.fs.StorageType type): Choose a datanode from the given scope with the specified storage type.
- ProvidedStorageMap.chooseProvidedDatanode(): Choose a datanode that reported a volume of StorageType PROVIDED.
- BlockInfo.getDatanode(int index)
- DatanodeDescriptor.CachedBlocksList.getDatanode()
- DatanodeManager.getDatanode(String datanodeUuid): Get a datanode descriptor given the corresponding datanode UUID.
- DatanodeManager.getDatanode(org.apache.hadoop.hdfs.protocol.DatanodeID nodeID): Get a datanode by datanode ID.
- DatanodeManager.getDatanodeByHost(String host)
- DatanodeManager.getDatanodeByHostName(String hostname)
- DatanodeManager.getDatanodeByXferAddr(String host, int xferPort)
- DatanodeStorageInfo.getDatanodeDescriptor()

Methods in org.apache.hadoop.hdfs.server.blockmanagement that return types with arguments of type DatanodeDescriptor
- DatanodeManager.getAllSlowDataNodes()
- DatanodeAdminMonitorBase.getCancelledNodes()
- DatanodeAdminMonitorInterface.getCancelledNodes()
- BlockManager.getCorruptReplicas(org.apache.hadoop.hdfs.protocol.Block block): Get the replicas which are corrupt for a given block.
- DatanodeManager.getDatanodeListForReport(org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType type): For generating datanode reports.
- DatanodeManager.getDatanodeMap()
- DatanodeManager.getDatanodes()
- DatanodeManager.getDecommissioningNodes()
- DatanodeManager.getEnteringMaintenanceNodes()
- DatanodeAdminManager.getPendingNodes()
- DatanodeAdminMonitorBase.getPendingNodes()
- DatanodeAdminMonitorInterface.getPendingNodes()

Methods in org.apache.hadoop.hdfs.server.blockmanagement with parameters of type DatanodeDescriptor
- protected int BlockPlacementPolicyDefault.addToExcludedNodes(DatanodeDescriptor localMachine, Set<org.apache.hadoop.net.Node> excludedNodes): Add localMachine and related nodes to excludedNodes for the next replica choice.
- protected int BlockPlacementPolicyWithNodeGroup.addToExcludedNodes(DatanodeDescriptor chosenNode, Set<org.apache.hadoop.net.Node> excludedNodes): Find other nodes in the same node group as localMachine and add them to excludedNodes, since replicas should not be duplicated on nodes within the same node group.
- protected void BlockPlacementPolicyDefault.chooseRemoteRack(int numOfReplicas, DatanodeDescriptor localMachine, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxReplicasPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<org.apache.hadoop.fs.StorageType, Integer> storageTypes): Choose numOfReplicas nodes from the racks that localMachine is NOT on.
- protected void BlockPlacementPolicyWithNodeGroup.chooseRemoteRack(int numOfReplicas, DatanodeDescriptor localMachine, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxReplicasPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<org.apache.hadoop.fs.StorageType, Integer> storageTypes)
- abstract List<DatanodeStorageInfo> BlockPlacementPolicy.chooseReplicasToDelete(Collection<DatanodeStorageInfo> availableReplicas, Collection<DatanodeStorageInfo> delCandidates, int expectedNumOfReplicas, List<org.apache.hadoop.fs.StorageType> excessTypes, DatanodeDescriptor addedNode, DatanodeDescriptor delNodeHint): Select the excess replica storages for deletion, based on either delNodeHint or the excess storage types.
- BlockPlacementPolicyDefault.chooseReplicasToDelete(Collection<DatanodeStorageInfo> availableReplicas, Collection<DatanodeStorageInfo> delCandidates, int expectedNumOfReplicas, List<org.apache.hadoop.fs.StorageType> excessTypes, DatanodeDescriptor addedNode, DatanodeDescriptor delNodeHint)
- BlockManager.chooseTarget4WebHDFS(String src, DatanodeDescriptor clientnode, Set<org.apache.hadoop.net.Node> excludes, long blocksize): Choose a target for WebHDFS redirection.
- protected int AvailableSpaceBlockPlacementPolicy.compareDataNode(DatanodeDescriptor a, DatanodeDescriptor b, boolean isBalanceLocal): Compare the two datanodes.
- protected int AvailableSpaceRackFaultTolerantBlockPlacementPolicy.compareDataNode(DatanodeDescriptor a, DatanodeDescriptor b): Compare the two datanodes.
- BlockManager.getCorruptReason(org.apache.hadoop.hdfs.protocol.Block block, DatanodeDescriptor node): Get the reason certain replicas of a given block are marked corrupt on a given datanode.
- boolean BlockManager.isExcess(DatanodeDescriptor dn, BlockInfo blk)
- protected boolean BlockPlacementPolicyWithUpgradeDomain.isGoodDatanode(DatanodeDescriptor node, int maxTargetPerRack, boolean considerLoad, List<DatanodeStorageInfo> results, boolean avoidStaleNodes)
- protected void DatanodeAdminManager.logBlockReplicationInfo(BlockInfo block, BlockCollection bc, DatanodeDescriptor srcNode, NumberReplicas num, Iterable<DatanodeStorageInfo> storages)
- protected static void BlockPlacementPolicyDefault.logNodeIsNotChosen(DatanodeDescriptor node, BlockPlacementPolicyDefault.NodeNotChosenReason reason, String reasonDetails)
- void BlockManagerFaultInjector.removeBlockReportLease(DatanodeDescriptor node, long leaseId)
- void ProvidedStorageMap.removeDatanode(DatanodeDescriptor dnToRemove)
- void BlockManager.removeStoredBlock(BlockInfo storedBlock, DatanodeDescriptor node): Modify the (block --> datanode) map.
- void BlockManagerFaultInjector.requestBlockReportLease(DatanodeDescriptor node, long leaseId)
- protected void DatanodeAdminManager.setDecommissioned(DatanodeDescriptor dn)
- protected void DatanodeAdminManager.setInMaintenance(DatanodeDescriptor dn)
- void DatanodeAdminManager.startDecommission(DatanodeDescriptor node): Start decommissioning the specified datanode.
- void DatanodeAdminManager.startMaintenance(DatanodeDescriptor node, long maintenanceExpireTimeInMS): Start maintenance of the specified datanode.
- void DatanodeAdminMonitorBase.startTrackingNode(DatanodeDescriptor dn): Start tracking a node for decommission or maintenance.
- void DatanodeAdminMonitorInterface.startTrackingNode(DatanodeDescriptor dn)
- void DatanodeAdminManager.stopDecommission(DatanodeDescriptor node): Stop decommissioning the specified datanode.
- void DatanodeAdminManager.stopMaintenance(DatanodeDescriptor node): Stop maintenance of the specified datanode.
- void DatanodeAdminBackoffMonitor.stopTrackingNode(DatanodeDescriptor dn): Queue a node to be removed from tracking.
- void DatanodeAdminDefaultMonitor.stopTrackingNode(DatanodeDescriptor dn)
- void DatanodeAdminMonitorInterface.stopTrackingNode(DatanodeDescriptor dn)
- void ProvidedStorageMap.updateStorage(DatanodeDescriptor node, org.apache.hadoop.hdfs.server.protocol.DatanodeStorage storage)

Method parameters in org.apache.hadoop.hdfs.server.blockmanagement with type arguments of type DatanodeDescriptor
- protected void BlockPlacementPolicyDefault.chooseFavouredNodes(String src, int numOfReplicas, List<DatanodeDescriptor> favoredNodes, Set<org.apache.hadoop.net.Node> favoriteAndExcludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<org.apache.hadoop.fs.StorageType, Integer> storageTypes)
- protected void BlockPlacementPolicyWithNodeGroup.chooseFavouredNodes(String src, int numOfReplicas, List<DatanodeDescriptor> favoredNodes, Set<org.apache.hadoop.net.Node> favoriteAndExcludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<org.apache.hadoop.fs.StorageType, Integer> storageTypes): Choose all good favored nodes as targets.
- void DatanodeManager.fetchDatanodes(List<DatanodeDescriptor> live, List<DatanodeDescriptor> dead, boolean removeDecommissionNode): Fetch live and dead datanodes.
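Several of the placement-policy methods above (for example AvailableSpaceBlockPlacementPolicy.compareDataNode) come down to ranking candidate datanodes by free space. The following is a rough, self-contained sketch of that idea only, not the actual Hadoop implementation: the NodeSpace class is a made-up stand-in for DatanodeDescriptor, and the tolerance parameter is invented for illustration.

```java
// Illustrative sketch only: a simplified stand-in for how an
// available-space placement policy might compare two datanodes.
// NodeSpace is NOT a Hadoop type; it fakes the per-node usage info
// that DatanodeDescriptor would provide.
public class AvailableSpaceSketch {

    /** Minimal stand-in for a datanode's capacity/remaining figures. */
    public static final class NodeSpace {
        final long capacity;
        final long remaining;

        public NodeSpace(long capacity, long remaining) {
            this.capacity = capacity;
            this.remaining = remaining;
        }

        double remainingPercent() {
            // Guard against divide-by-zero for an empty-capacity node.
            return capacity == 0 ? 0.0 : (double) remaining / capacity;
        }
    }

    /**
     * Returns a negative value if a has noticeably more free space than b,
     * a positive value if b does, and 0 when they are within the given
     * tolerance (mirroring the spirit of compareDataNode, not its logic).
     */
    public static int compareBySpace(NodeSpace a, NodeSpace b, double tolerance) {
        double diff = a.remainingPercent() - b.remainingPercent();
        if (Math.abs(diff) < tolerance) {
            return 0; // close enough: treat as equally good candidates
        }
        return diff > 0 ? -1 : 1; // prefer the emptier node
    }
}
```

A comparator like this is what lets a policy bias replica placement toward emptier nodes while still treating near-equal nodes as interchangeable.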
Uses of DatanodeDescriptor in org.apache.hadoop.hdfs.server.namenode
Methods in org.apache.hadoop.hdfs.server.namenode that return types with arguments of type DatanodeDescriptor
- CachedBlock.getDatanodes(DatanodeDescriptor.CachedBlocksList.Type type): Get a list of the datanodes on which this block is cached, planned to be cached, or planned to be uncached.
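CachedBlock.getDatanodes selects between three per-block lists keyed by a CachedBlocksList.Type. The toy model below sketches that pattern in a self-contained way; the enum constant names are chosen to mirror the cached / pending-cached / pending-uncached states described above, but everything else (the class, the String datanode ids) is invented for the sketch and is not the Hadoop API.

```java
import java.util.ArrayList;
import java.util.EnumMap;
import java.util.List;

// Illustrative sketch only: a toy model of the three cached-block lists
// that CachedBlock.getDatanodes(...) selects between. Not Hadoop code.
public class CachedListsSketch {

    /** Mirrors the three states a datanode can be in for a cached block. */
    public enum Type { CACHED, PENDING_CACHED, PENDING_UNCACHED }

    // One list of (made-up, String) datanode ids per state.
    private final EnumMap<Type, List<String>> lists = new EnumMap<>(Type.class);

    public CachedListsSketch() {
        for (Type t : Type.values()) {
            lists.put(t, new ArrayList<>());
        }
    }

    /** Record that a datanode belongs to one of the three lists. */
    public void add(Type type, String datanodeId) {
        lists.get(type).add(datanodeId);
    }

    /** Analogue of getDatanodes(type): the nodes on the requested list. */
    public List<String> getDatanodes(Type type) {
        return lists.get(type);
    }
}
```

The point of the pattern is that callers ask one accessor for whichever lifecycle stage they care about, rather than walking a single mixed list.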