Class BlockInfo
java.lang.Object
org.apache.hadoop.hdfs.protocol.Block
org.apache.hadoop.hdfs.server.blockmanagement.BlockInfo
- All Implemented Interfaces:
Comparable<org.apache.hadoop.hdfs.protocol.Block>, org.apache.hadoop.io.Writable, org.apache.hadoop.util.LightWeightGSet.LinkedElement
- Direct Known Subclasses:
BlockInfoContiguous, BlockInfoStriped
@Private
public abstract class BlockInfo
extends org.apache.hadoop.hdfs.protocol.Block
implements org.apache.hadoop.util.LightWeightGSet.LinkedElement
For a given block (or an erasure coding block group), the BlockInfo class
maintains 1) the BlockCollection it is part of, and 2) the datanodes
where the replicas of the block, or the blocks belonging to the erasure
coding block group, are stored.
Field Summary
Fields
- static final BlockInfo[] EMPTY_ARRAY
- protected Object[] triplets: This array contains triplets of references.
Fields inherited from class org.apache.hadoop.hdfs.protocol.Block
BLOCK_FILE_PREFIX, blockFilePattern, METADATA_EXTENSION, metaFilePattern, metaOrBlockFilePattern
Constructor Summary
Constructors
- BlockInfo(short size): Construct an entry for blocksmap.
- BlockInfo(org.apache.hadoop.hdfs.protocol.Block blk, short size)
Method Summary
Modifier and Type / Method / Description
- void convertToBlockUnderConstruction(HdfsServerConstants.BlockUCState s, DatanodeStorageInfo[] targets): Add/Update the under construction feature.
- void delete()
- boolean equals(Object obj)
- long getBlockCollectionId()
- abstract org.apache.hadoop.hdfs.protocol.BlockType getBlockType()
- getBlockUCState()
- int getCapacity()
- getDatanode(int index)
- org.apache.hadoop.util.LightWeightGSet.LinkedElement getNext()
- short getReplication()
- getStorageInfos()
- getUnderConstructionFeature()
- int hashCode()
- boolean isComplete(): Is this block complete?
- final boolean isCompleteOrCommitted()
- boolean isDeleted()
- abstract boolean isStriped()
- boolean isUnderRecovery()
- BlockInfo moveBlockToHead(BlockInfo head, DatanodeStorageInfo storage, int curIndex, int headIndex): Remove this block from the list of blocks related to the specified DatanodeDescriptor.
- abstract int numNodes(): Count the number of data-nodes the block currently belongs to (i.e., NN has received block reports from the DN).
- void setBlockCollectionId(long id)
- List<org.apache.hadoop.hdfs.server.blockmanagement.ReplicaUnderConstruction> setGenerationStampAndVerifyReplicas(long genStamp): Process the recorded replicas.
- void setNext(org.apache.hadoop.util.LightWeightGSet.LinkedElement next)
- void setReplication(short repl)
Methods inherited from class org.apache.hadoop.hdfs.protocol.Block
appendStringTo, compareTo, filename2id, getBlockId, getBlockId, getBlockName, getGenerationStamp, getGenerationStamp, getNumBytes, isBlockFilename, isMetaFilename, matchingIdAndGenStamp, metaToBlockFile, readFields, readId, set, setBlockId, setGenerationStamp, setNumBytes, toString, toString, write, writeId
-
Field Details
-
EMPTY_ARRAY
-
triplets
This array contains triplets of references. For each i-th storage the block belongs to, triplets[3*i] is the reference to the DatanodeStorageInfo, and triplets[3*i+1] and triplets[3*i+2] are references to the previous and the next blocks, respectively, in the list of blocks belonging to this storage. Keeping the previous and next references in the Object triplets instead of a LinkedList uses memory more efficiently: with LinkedList the cost per replica is 42 bytes (a LinkedList#Entry object per replica) versus 16 bytes using the triplets.
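The indexing scheme described above can be sketched with a small stand-alone class. This is a hypothetical simplification for illustration only: the class name `TripletsSketch` and the use of plain `Object`/`String` in place of `DatanodeStorageInfo` and `BlockInfo` references are assumptions, not Hadoop code.

```java
// Hypothetical, simplified sketch of the triplets layout described above.
// TripletsSketch is illustrative; it is not part of the Hadoop API.
public class TripletsSketch {
    // triplets[3*i]   -> storage reference for the i-th replica
    // triplets[3*i+1] -> previous block in that storage's block list
    // triplets[3*i+2] -> next block in that storage's block list
    private final Object[] triplets;

    public TripletsSketch(int replication) {
        // One triplet (3 slots) per expected replica.
        this.triplets = new Object[3 * replication];
    }

    // Number of replica slots this array can hold.
    public int getCapacity() {
        return triplets.length / 3;
    }

    public Object getStorage(int i)  { return triplets[3 * i]; }
    public Object getPrevious(int i) { return triplets[3 * i + 1]; }
    public Object getNext(int i)     { return triplets[3 * i + 2]; }

    public void setStorage(int i, Object storage) {
        triplets[3 * i] = storage;
    }

    public static void main(String[] args) {
        TripletsSketch b = new TripletsSketch(3);
        b.setStorage(0, "storage-A");
        System.out.println(b.getCapacity());  // 3
        System.out.println(b.getStorage(0));  // storage-A
    }
}
```

The flat `Object[]` keeps all per-replica bookkeeping in one array allocation, which is where the memory saving over a `LinkedList` of entry objects comes from.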
-
-
Constructor Details
-
BlockInfo
public BlockInfo(short size)
Construct an entry for blocksmap.
- Parameters:
size - the block's replication factor, or the total number of blocks in the block group
-
BlockInfo
public BlockInfo(org.apache.hadoop.hdfs.protocol.Block blk, short size)
-
-
Method Details
-
getReplication
public short getReplication() -
setReplication
public void setReplication(short repl) -
getBlockCollectionId
public long getBlockCollectionId() -
setBlockCollectionId
public void setBlockCollectionId(long id) -
delete
public void delete() -
isDeleted
public boolean isDeleted() -
getStorageInfos
-
getDatanode
-
getCapacity
public int getCapacity() -
numNodes
public abstract int numNodes()
Count the number of data-nodes the block currently belongs to (i.e., NN has received block reports from the DN). -
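In a triplets-style array, counting the nodes a block belongs to amounts to counting non-null storage slots. The sketch below assumes (hypothetically) that unused capacity is marked by null in slot 3*i; the class name `NumNodesSketch` is illustrative, not Hadoop code.

```java
// Hypothetical sketch of counting replicas in a triplets-style array,
// assuming a null slot at 3*i means "no storage recorded for slot i".
public class NumNodesSketch {
    static int numNodes(Object[] triplets) {
        int count = 0;
        // Slot 3*i holds the i-th storage reference; scan only those slots.
        for (int i = 0; 3 * i < triplets.length; i++) {
            if (triplets[3 * i] != null) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        Object[] t = new Object[9];   // capacity for 3 replicas
        t[0] = "storage-A";           // replica 0 reported
        t[3] = "storage-B";           // replica 1 reported
        System.out.println(numNodes(t)); // 2
    }
}
```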
isStriped
public abstract boolean isStriped() -
getBlockType
public abstract org.apache.hadoop.hdfs.protocol.BlockType getBlockType() -
moveBlockToHead
public BlockInfo moveBlockToHead(BlockInfo head, DatanodeStorageInfo storage, int curIndex, int headIndex)
Remove this block from the list of blocks related to the specified DatanodeDescriptor. Insert it into the head of the list of blocks.
- Returns:
the new head of the list.
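The essence of this operation, unlink a node and reinsert it at the head of a doubly linked list, can be shown on a plain node-based list. This is a simplified sketch, not the triplets-based Hadoop implementation; `MoveToHeadSketch` and `Node` are hypothetical names.

```java
// Illustrative sketch (not Hadoop code) of the move-to-head operation on
// a doubly linked list, the same idea used per-storage in the triplets scheme.
public class MoveToHeadSketch {
    static class Node {
        final String name;
        Node prev, next;
        Node(String name) { this.name = name; }
    }

    // Unlink n from wherever it sits and insert it before the current head.
    // Returns the new head of the list.
    static Node moveToHead(Node head, Node n) {
        if (n == head) return head;
        // Unlink n from its current position.
        if (n.prev != null) n.prev.next = n.next;
        if (n.next != null) n.next.prev = n.prev;
        // Insert n at the head.
        n.prev = null;
        n.next = head;
        if (head != null) head.prev = n;
        return n;
    }

    public static void main(String[] args) {
        Node a = new Node("a"), b = new Node("b"), c = new Node("c");
        a.next = b; b.prev = a; b.next = c; c.prev = b;   // list: a <-> b <-> c
        Node head = moveToHead(a, c);                     // list: c <-> a <-> b
        System.out.println(head.name);       // c
        System.out.println(head.next.name);  // a
    }
}
```

Moving a just-touched block to the head is a cheap way to keep recently reported blocks near the front of a storage's block list.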
-
hashCode
public int hashCode()
- Overrides:
hashCode in class org.apache.hadoop.hdfs.protocol.Block
-
equals
- Overrides:
equals in class org.apache.hadoop.hdfs.protocol.Block
-
getNext
public org.apache.hadoop.util.LightWeightGSet.LinkedElement getNext()
- Specified by:
getNext in interface org.apache.hadoop.util.LightWeightGSet.LinkedElement
-
setNext
public void setNext(org.apache.hadoop.util.LightWeightGSet.LinkedElement next)
- Specified by:
setNext in interface org.apache.hadoop.util.LightWeightGSet.LinkedElement
-
getUnderConstructionFeature
-
getBlockUCState
-
isComplete
public boolean isComplete()
Is this block complete?
- Returns:
true if the state of the block is
HdfsServerConstants.BlockUCState.COMPLETE
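The state checks around here (isComplete, isCompleteOrCommitted) reduce to comparisons against the block's construction state. The sketch below uses a local stand-in enum whose values mirror HdfsServerConstants.BlockUCState; the class `BlockStateSketch` and its methods are hypothetical, not the Hadoop implementation.

```java
// Minimal sketch of the block-state checks; the enum is a local stand-in
// for HdfsServerConstants.BlockUCState, not the Hadoop type itself.
public class BlockStateSketch {
    enum BlockUCState { UNDER_CONSTRUCTION, UNDER_RECOVERY, COMMITTED, COMPLETE }

    // A block is complete only once its state reaches COMPLETE.
    static boolean isComplete(BlockUCState s) {
        return s == BlockUCState.COMPLETE;
    }

    // COMMITTED blocks are also accepted by the "complete or committed" check.
    static boolean isCompleteOrCommitted(BlockUCState s) {
        return s == BlockUCState.COMPLETE || s == BlockUCState.COMMITTED;
    }

    public static void main(String[] args) {
        System.out.println(isComplete(BlockUCState.COMPLETE));            // true
        System.out.println(isComplete(BlockUCState.COMMITTED));           // false
        System.out.println(isCompleteOrCommitted(BlockUCState.COMMITTED)); // true
    }
}
```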
-
isUnderRecovery
public boolean isUnderRecovery() -
isCompleteOrCommitted
public final boolean isCompleteOrCommitted() -
convertToBlockUnderConstruction
public void convertToBlockUnderConstruction(HdfsServerConstants.BlockUCState s, DatanodeStorageInfo[] targets)
Add/Update the under construction feature. -
setGenerationStampAndVerifyReplicas
public List<org.apache.hadoop.hdfs.server.blockmanagement.ReplicaUnderConstruction> setGenerationStampAndVerifyReplicas(long genStamp)
Process the recorded replicas. When about to commit or finish pipeline recovery, sort out bad replicas.
- Parameters:
genStamp - the final generation stamp for the block.
- Returns:
the list of stale replicas.
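A sketch of the idea behind this method: replicas whose generation stamp does not match the final one are separated out as stale. This is a hypothetical simplification; `GenStampSketch`, the local `Replica` type, and `verifyReplicas` are illustrative names, not the Hadoop `ReplicaUnderConstruction` machinery.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of sorting out stale replicas by generation stamp,
// in the spirit of setGenerationStampAndVerifyReplicas; Replica is a
// local stand-in, not Hadoop's ReplicaUnderConstruction.
public class GenStampSketch {
    static class Replica {
        final String storage;
        final long genStamp;
        Replica(String storage, long genStamp) {
            this.storage = storage;
            this.genStamp = genStamp;
        }
    }

    // Keep replicas matching the final generation stamp; return the stale ones.
    static List<Replica> verifyReplicas(List<Replica> replicas, long finalGenStamp) {
        List<Replica> stale = new ArrayList<>();
        replicas.removeIf(r -> {
            if (r.genStamp != finalGenStamp) {
                stale.add(r);   // record, then drop from the live list
                return true;
            }
            return false;
        });
        return stale;
    }

    public static void main(String[] args) {
        List<Replica> reps = new ArrayList<>();
        reps.add(new Replica("s1", 1001));
        reps.add(new Replica("s2", 1000)); // stale: missed the recovery
        List<Replica> stale = verifyReplicas(reps, 1001);
        System.out.println(stale.size()); // 1
        System.out.println(reps.size());  // 1
    }
}
```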
-