Uses of Class
org.apache.hadoop.hdfs.protocol.DatanodeInfo
Packages that use DatanodeInfo
- org.apache.hadoop.hdfs
- org.apache.hadoop.hdfs.client - This package provides the administrative APIs for HDFS.
- org.apache.hadoop.hdfs.client.impl
- org.apache.hadoop.hdfs.protocol
- org.apache.hadoop.hdfs.protocol.datatransfer
- org.apache.hadoop.hdfs.protocolPB
- org.apache.hadoop.hdfs.server.protocol - Package contains classes that allow HDFS to communicate information between DataNode and NameNode.
- org.apache.hadoop.hdfs.shortcircuit
Uses of DatanodeInfo in org.apache.hadoop.hdfs
Fields in org.apache.hadoop.hdfs with type parameters of type DatanodeInfo
- protected final org.apache.hadoop.thirdparty.com.google.common.cache.LoadingCache<DatanodeInfo,DatanodeInfo> DataStreamer.excludedNodes

Methods in org.apache.hadoop.hdfs that return DatanodeInfo
- DFSClient.datanodeReport(HdfsConstants.DatanodeReportType type)
- DFSInputStream.getCurrentDatanode() - Returns the datanode from which the stream is currently reading.
- DistributedFileSystem.getDataNodeStats()
- DistributedFileSystem.getDataNodeStats(HdfsConstants.DatanodeReportType type)
- ViewDistributedFileSystem.getDataNodeStats()
- ViewDistributedFileSystem.getDataNodeStats(HdfsConstants.DatanodeReportType type)
- DFSOutputStream.getPipeline()
- DistributedFileSystem.getSlowDatanodeStats() - Retrieve stats for slow running datanodes.
- ViewDistributedFileSystem.getSlowDatanodeStats()
- DFSClient.slowDatanodeReport()

Methods in org.apache.hadoop.hdfs that return types with arguments of type DatanodeInfo
- DeadNodeDetector.clearAndGetDetectedDeadNodes() - Remove dead nodes that are not used by any DFSInputStream from deadNodes.
- DFSUtilClient.CorruptedBlocks.getCorruptionMap()
- DFSClient.getDeadNodes(DFSInputStream dfsInputStream) - If deadNodeDetectionEnabled is true, return the dead nodes detected by all the DFSInputStreams in the same client.
- org.apache.hadoop.hdfs.DeadNodeDetector.UniqueQueue<DatanodeInfo> DeadNodeDetector.getDeadNodesProbeQueue()
- protected ConcurrentHashMap<DatanodeInfo,DatanodeInfo> DFSInputStream.getLocalDeadNodes()
- org.apache.hadoop.hdfs.DeadNodeDetector.UniqueQueue<DatanodeInfo> DeadNodeDetector.getSuspectNodesProbeQueue()

Methods in org.apache.hadoop.hdfs with parameters of type DatanodeInfo
- void DFSUtilClient.CorruptedBlocks.addCorruptedBlock(ExtendedBlock blk, DatanodeInfo node) - Indicate that a block replica on the specified datanode is corrupted.
- void DFSClient.addNodeToDeadNodeDetector(DFSInputStream dfsInputStream, DatanodeInfo datanodeInfo) - Add the given datanode to the DeadNodeDetector.
- void DeadNodeDetector.addNodeToDetect(DFSInputStream dfsInputStream, DatanodeInfo datanodeInfo) - Add the datanode to suspectNodes and suspectAndDeadNodes.
- protected void DFSInputStream.addToLocalDeadNodes(DatanodeInfo dnInfo)
- protected IOStreamPair DFSClient.connectToDN(DatanodeInfo dn, int timeout, org.apache.hadoop.security.token.Token<BlockTokenIdentifier> blockToken)
- static IOStreamPair DFSUtilClient.connectToDN(DatanodeInfo dn, int timeout, org.apache.hadoop.conf.Configuration conf, SaslDataTransferClient saslClient, SocketFactory socketFactory, boolean connectToDnViaHostname, DataEncryptionKeyFactory dekFactory, org.apache.hadoop.security.token.Token<BlockTokenIdentifier> blockToken) - Connect to the given datanode's data transfer port and return the resulting IOStreamPair.
- protected BlockReader DFSInputStream.getBlockReader(LocatedBlock targetBlock, long offsetInBlock, long length, InetSocketAddress targetAddr, org.apache.hadoop.fs.StorageType storageType, DatanodeInfo datanode)
- int ClientContext.getNetworkDistance(DatanodeInfo datanodeInfo)
- protected org.apache.hadoop.util.DataChecksum.Type DFSClient.inferChecksumTypeByReading(LocatedBlock lb, DatanodeInfo dn) - Infer the checksum type for a replica by sending an OP_READ_BLOCK for the first byte of that replica.
- boolean DeadNodeDetector.isDeadNode(DatanodeInfo datanodeInfo)
- boolean DFSClient.isDeadNode(DFSInputStream dfsInputStream, DatanodeInfo datanodeInfo) - If deadNodeDetectionEnabled is true, judge based on whether this datanode is included in the DeadNodeDetector.
- protected void DFSInputStream.removeFromLocalDeadNodes(DatanodeInfo dnInfo)
- void DeadNodeDetector.removeNodeFromDeadNodeDetector(DFSInputStream dfsInputStream, DatanodeInfo datanodeInfo) - Remove the suspect and dead node from suspectAndDeadNodes#dfsInputStream and local deadNodes.
- void DFSClient.removeNodeFromDeadNodeDetector(DFSInputStream dfsInputStream, DatanodeInfo datanodeInfo) - Remove the given datanode from the DeadNodeDetector.
- protected boolean StripedDataStreamer.setupPipelineInternal(DatanodeInfo[] nodes, org.apache.hadoop.fs.StorageType[] nodeStorageTypes, String[] nodeStorageIDs)

Method parameters in org.apache.hadoop.hdfs with type arguments of type DatanodeInfo
- protected org.apache.hadoop.hdfs.DFSInputStream.DNAddrPair DFSInputStream.getBestNodeDNAddrPair(LocatedBlock block, Collection<DatanodeInfo> ignoredNodes) - Get the best node from which to stream the data.
- protected void DFSInputStream.reportLostBlock(LocatedBlock lostBlock, Collection<DatanodeInfo> ignoredNodes) - Warn the user of a lost block.
- protected void DFSStripedInputStream.reportLostBlock(LocatedBlock lostBlock, Collection<DatanodeInfo> ignoredNodes)
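The datanode-report methods above return DatanodeInfo instances describing live cluster state. A minimal sketch of fetching them through DistributedFileSystem.getDataNodeStats(), assuming a reachable cluster; the namenode URI and port are hypothetical placeholders:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;

public class DatanodeReportExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Hypothetical namenode address; replace with your cluster's.
    try (FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf)) {
      DistributedFileSystem dfs = (DistributedFileSystem) fs;
      // LIVE restricts the report to datanodes currently heartbeating;
      // ALL, DEAD, and DECOMMISSIONING are the other report types.
      DatanodeInfo[] live = dfs.getDataNodeStats(HdfsConstants.DatanodeReportType.LIVE);
      for (DatanodeInfo dn : live) {
        System.out.printf("%s used %d of %d bytes%n",
            dn.getXferAddr(), dn.getDfsUsed(), dn.getCapacity());
      }
    }
  }
}
```

Note that getDataNodeStats() issues a full datanode report from the namenode, which is relatively expensive; avoid calling it in tight loops.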
Uses of DatanodeInfo in org.apache.hadoop.hdfs.client
Methods in org.apache.hadoop.hdfs.client that return DatanodeInfo
- HdfsDataInputStream.getCurrentDatanode() - Get the datanode from which the stream is currently reading.
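A public FSDataInputStream opened against HDFS can be inspected for its current source datanode via HdfsDataInputStream. A sketch, assuming an already-open FileSystem `fs` backed by HDFS and an existing (hypothetical) file path:

```java
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.client.HdfsDataInputStream;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

public class CurrentDatanodeExample {
  // 'fs' is assumed to be an open HDFS-backed FileSystem; the path is illustrative.
  static void printCurrentDatanode(FileSystem fs) throws Exception {
    try (FSDataInputStream in = fs.open(new Path("/data/example.txt"))) {
      if (in instanceof HdfsDataInputStream) {
        HdfsDataInputStream hin = (HdfsDataInputStream) in;
        hin.read(); // trigger a block read so a source datanode is selected
        DatanodeInfo dn = hin.getCurrentDatanode();
        if (dn != null) {
          System.out.println("Reading from " + dn.getHostName());
        }
      }
    }
  }
}
```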
Uses of DatanodeInfo in org.apache.hadoop.hdfs.client.impl
Methods in org.apache.hadoop.hdfs.client.impl with parameters of type DatanodeInfo
Uses of DatanodeInfo in org.apache.hadoop.hdfs.protocol
Subclasses of DatanodeInfo in org.apache.hadoop.hdfs.protocol

Fields in org.apache.hadoop.hdfs.protocol declared as DatanodeInfo

Methods in org.apache.hadoop.hdfs.protocol that return DatanodeInfo
- DatanodeInfo.DatanodeInfoBuilder.build()
- LocatedBlock.getCachedLocations()
- ClientProtocol.getDatanodeReport(HdfsConstants.DatanodeReportType type) - Get a report on the system's current datanodes.
- StripedBlockInfo.getDatanodes()
- ClientProtocol.getSlowDatanodeReport() - Get a report on all of the slow Datanodes.

Methods in org.apache.hadoop.hdfs.protocol with parameters of type DatanodeInfo
- ClientProtocol.addBlock(String src, String clientName, ExtendedBlock previous, DatanodeInfo[] excludeNodes, long fileId, String[] favoredNodes, EnumSet<AddBlockFlag> addBlockFlags) - A client that wants to write an additional block to the indicated filename (which must currently be open for writing) should call addBlock().
- void LocatedBlock.addCachedLoc(DatanodeInfo loc) - Add the location of a cached replica of the block.
- ClientProtocol.getAdditionalDatanode(String src, long fileId, ExtendedBlock blk, DatanodeInfo[] existings, String[] existingStorageIDs, DatanodeInfo[] excludes, int numAdditionalNodes, String clientName) - Get a datanode for an existing pipeline.
- DatanodeInfo.DatanodeInfoBuilder.setFrom(DatanodeInfo from)

Constructors in org.apache.hadoop.hdfs.protocol with parameters of type DatanodeInfo
- protected DatanodeInfo(DatanodeInfo from)
- DatanodeInfoWithStorage(DatanodeInfo from, String storageID, org.apache.hadoop.fs.StorageType storageType)
- LocatedBlock(ExtendedBlock b, DatanodeInfo[] locs)
- LocatedBlock(ExtendedBlock b, DatanodeInfo[] locs, String[] storageIDs, org.apache.hadoop.fs.StorageType[] storageTypes)
- LocatedBlock(ExtendedBlock b, DatanodeInfo[] locs, String[] storageIDs, org.apache.hadoop.fs.StorageType[] storageTypes, long startOffset, boolean corrupt, DatanodeInfo[] cachedLocs)
- LocatedBlock(ExtendedBlock b, DatanodeInfoWithStorage[] locs, String[] storageIDs, org.apache.hadoop.fs.StorageType[] storageTypes, long startOffset, boolean corrupt, DatanodeInfo[] cachedLocs)
- LocatedStripedBlock(ExtendedBlock b, DatanodeInfo[] locs, String[] storageIDs, org.apache.hadoop.fs.StorageType[] storageTypes, byte[] indices, long startOffset, boolean corrupt, DatanodeInfo[] cachedLocs)
- StripedBlockInfo(ExtendedBlock block, DatanodeInfo[] datanodes, org.apache.hadoop.security.token.Token<BlockTokenIdentifier>[] blockTokens, byte[] blockIndices, ErasureCodingPolicy ecPolicy)
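DatanodeInfo.DatanodeInfoBuilder, listed above, is the usual way to construct a standalone DatanodeInfo (e.g. in tests). A sketch under the assumption of Hadoop 3.x builder setters; all addresses, ports, and the UUID below are illustrative values, not real cluster data:

```java
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

public class DatanodeInfoBuilderExample {
  public static void main(String[] args) {
    // Illustrative values only; a real DatanodeInfo normally comes from the namenode.
    DatanodeInfo dn = new DatanodeInfo.DatanodeInfoBuilder()
        .setIpAddr("203.0.113.10")
        .setHostName("dn1.example.com")
        .setDatanodeUuid("example-uuid")
        .setXferPort(9866)   // default HDFS data transfer port
        .setInfoPort(9864)   // default HTTP info port
        .setIpcPort(9867)    // default IPC port
        .build();
    // getXferAddr() combines the IP address and transfer port.
    System.out.println(dn.getXferAddr());
  }
}
```

DatanodeInfoBuilder.setFrom(DatanodeInfo from), also listed above, copies every field from an existing instance instead of setting them one by one.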
Uses of DatanodeInfo in org.apache.hadoop.hdfs.protocol.datatransfer
Methods in org.apache.hadoop.hdfs.protocol.datatransfer with parameters of type DatanodeInfo
- void DataTransferProtocol.replaceBlock(ExtendedBlock blk, org.apache.hadoop.fs.StorageType storageType, org.apache.hadoop.security.token.Token<BlockTokenIdentifier> blockToken, String delHint, DatanodeInfo source, String storageId) - Receive a block from a source datanode and then notify the namenode to remove the copy from the original datanode.
- void Sender.replaceBlock(ExtendedBlock blk, org.apache.hadoop.fs.StorageType storageType, org.apache.hadoop.security.token.Token<BlockTokenIdentifier> blockToken, String delHint, DatanodeInfo source, String storageId)
- boolean ReplaceDatanodeOnFailure.satisfy(short replication, DatanodeInfo[] existings, boolean isAppend, boolean isHflushed) - Does it need a replacement according to the policy?
- void DataTransferProtocol.transferBlock(ExtendedBlock blk, org.apache.hadoop.security.token.Token<BlockTokenIdentifier> blockToken, String clientName, DatanodeInfo[] targets, org.apache.hadoop.fs.StorageType[] targetStorageTypes, String[] targetStorageIDs) - Transfer a block to another datanode.
- void Sender.transferBlock(ExtendedBlock blk, org.apache.hadoop.security.token.Token<BlockTokenIdentifier> blockToken, String clientName, DatanodeInfo[] targets, org.apache.hadoop.fs.StorageType[] targetStorageTypes, String[] targetStorageIds)
- void DataTransferProtocol.writeBlock(ExtendedBlock blk, org.apache.hadoop.fs.StorageType storageType, org.apache.hadoop.security.token.Token<BlockTokenIdentifier> blockToken, String clientName, DatanodeInfo[] targets, org.apache.hadoop.fs.StorageType[] targetStorageTypes, DatanodeInfo source, BlockConstructionStage stage, int pipelineSize, long minBytesRcvd, long maxBytesRcvd, long latestGenerationStamp, org.apache.hadoop.util.DataChecksum requestedChecksum, CachingStrategy cachingStrategy, boolean allowLazyPersist, boolean pinning, boolean[] targetPinnings, String storageID, String[] targetStorageIDs) - Write a block to a datanode pipeline.
- void Sender.writeBlock(ExtendedBlock blk, org.apache.hadoop.fs.StorageType storageType, org.apache.hadoop.security.token.Token<BlockTokenIdentifier> blockToken, String clientName, DatanodeInfo[] targets, org.apache.hadoop.fs.StorageType[] targetStorageTypes, DatanodeInfo source, BlockConstructionStage stage, int pipelineSize, long minBytesRcvd, long maxBytesRcvd, long latestGenerationStamp, org.apache.hadoop.util.DataChecksum requestedChecksum, CachingStrategy cachingStrategy, boolean allowLazyPersist, boolean pinning, boolean[] targetPinnings, String storageId, String[] targetStorageIds)
Uses of DatanodeInfo in org.apache.hadoop.hdfs.protocolPB
Methods in org.apache.hadoop.hdfs.protocolPB that return DatanodeInfo
- static DatanodeInfo[] PBHelperClient.convert(List<org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.DatanodeInfoProto> list)
- static DatanodeInfo PBHelperClient.convert(org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.DatanodeInfoProto di)
- static DatanodeInfo[] PBHelperClient.convert(org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.DatanodeInfoProto[] di)
- static DatanodeInfo[] PBHelperClient.convert(org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.DatanodeInfosProto datanodeInfosProto)
- ClientNamenodeProtocolTranslatorPB.getDatanodeReport(HdfsConstants.DatanodeReportType type)
- ClientNamenodeProtocolTranslatorPB.getSlowDatanodeReport()

Methods in org.apache.hadoop.hdfs.protocolPB with parameters of type DatanodeInfo
- ClientNamenodeProtocolTranslatorPB.addBlock(String src, String clientName, ExtendedBlock previous, DatanodeInfo[] excludeNodes, long fileId, String[] favoredNodes, EnumSet<AddBlockFlag> addBlockFlags)
- static org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.DatanodeInfoProto PBHelperClient.convert(DatanodeInfo info)
- static List<? extends org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.DatanodeInfoProto> PBHelperClient.convert(DatanodeInfo[] dnInfos)
- static List<? extends org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.DatanodeInfoProto> PBHelperClient.convert(DatanodeInfo[] dnInfos, int startIdx) - Copy from dnInfos to a target list of the same size, starting at startIdx.
- static org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.DatanodeInfoProto PBHelperClient.convertDatanodeInfo(DatanodeInfo di)
- static org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.DatanodeInfosProto PBHelperClient.convertToProto(DatanodeInfo[] datanodeInfos)
- ClientNamenodeProtocolTranslatorPB.getAdditionalDatanode(String src, long fileId, ExtendedBlock blk, DatanodeInfo[] existings, String[] existingStorageIDs, DatanodeInfo[] excludes, int numAdditionalNodes, String clientName)
Uses of DatanodeInfo in org.apache.hadoop.hdfs.server.protocol
Methods in org.apache.hadoop.hdfs.server.protocol that return DatanodeInfo

Constructors in org.apache.hadoop.hdfs.server.protocol with parameters of type DatanodeInfo
- DatanodeStorageReport(DatanodeInfo datanodeInfo, StorageReport[] storageReports)
Uses of DatanodeInfo in org.apache.hadoop.hdfs.shortcircuit
Methods in org.apache.hadoop.hdfs.shortcircuit with parameters of type DatanodeInfo
- ShortCircuitCache.allocShmSlot(DatanodeInfo datanode, DomainPeer peer, org.apache.commons.lang3.mutable.MutableBoolean usedPeer, ExtendedBlockId blockId, String clientName) - Allocate a new shared memory slot.
- DfsClientShmManager.allocSlot(DatanodeInfo datanode, DomainPeer peer, org.apache.commons.lang3.mutable.MutableBoolean usedPeer, ExtendedBlockId blockId, String clientName)

Method parameters in org.apache.hadoop.hdfs.shortcircuit with type arguments of type DatanodeInfo
- void DfsClientShmManager.Visitor.visit(HashMap<DatanodeInfo, DfsClientShmManager.PerDatanodeVisitorInfo> info)