Uses of Class
org.apache.hadoop.hdfs.server.datanode.ReplicaInfo
Packages that use ReplicaInfo:

- org.apache.hadoop.hdfs.server.datanode
- org.apache.hadoop.hdfs.server.datanode.fsdataset
- org.apache.hadoop.hdfs.server.datanode.fsdataset.impl
Uses of ReplicaInfo in org.apache.hadoop.hdfs.server.datanode
Subclasses of ReplicaInfo in org.apache.hadoop.hdfs.server.datanode:

- class FinalizedProvidedReplica: This class is used for provided replicas that are finalized.
- class FinalizedReplica: This class describes a replica that has been finalized.
- class LocalReplica: This class is used for all replicas which are on local storage media and hence are backed by files.
- class LocalReplicaInPipeline: This class defines a replica in a pipeline, which includes a persistent replica being written to by a dfs client, or a temporary replica being replicated by a source datanode or copied for balancing purposes.
- class ProvidedReplica: This abstract class is used as a base class for provided replicas.
- class ReplicaBeingWritten: This class represents replicas being written.
- class ReplicaUnderRecovery: This class represents replicas that are under block recovery. It has a recovery id equal to the generation stamp that the replica will be bumped to after recovery; the recovery id is used to handle multiple concurrent block recoveries.
- class ReplicaWaitingToBeRecovered: This class represents a replica that is waiting to be recovered.

Methods in org.apache.hadoop.hdfs.server.datanode that return ReplicaInfo:

- ReplicaBuilder.build()
- FinalizedProvidedReplica.getOriginalReplica()
- FinalizedReplica.getOriginalReplica()
- LocalReplicaInPipeline.getOriginalReplica()
- abstract ReplicaInfo ReplicaInfo.getOriginalReplica()
- ReplicaUnderRecovery.getOriginalReplica(): Get the original replica that's under recovery.
- ReplicaWaitingToBeRecovered.getOriginalReplica()
- LocalReplicaInPipeline.getReplicaInfo()
- ReplicaInPipeline.getReplicaInfo()

Methods in org.apache.hadoop.hdfs.server.datanode with parameters of type ReplicaInfo:

- ReplicaBuilder.from(ReplicaInfo fromReplica)
- BlockPoolSliceStorage.getTrashDirectory(ReplicaInfo info): Get a target subdirectory under trash/ for a given block file that is being deleted.
- DataStorage.getTrashDirectoryForReplica(String bpid, ReplicaInfo info): If rolling upgrades are in progress then do not delete block files immediately.
- void LocalReplicaInPipeline.moveReplicaFrom(ReplicaInfo oldReplicaInfo, File newBlkFile)

Constructors in org.apache.hadoop.hdfs.server.datanode with parameters of type ReplicaInfo
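The `ReplicaBuilder.from(ReplicaInfo)` / `build()` pair listed above follows the familiar builder pattern: seed a builder from an existing replica, override selected fields, and construct a new `ReplicaInfo`. The stand-in classes below are hypothetical simplifications (the real Hadoop classes carry volume, storage-location, and file state); they only illustrate the copy-then-override shape of the API, a minimal sketch under those assumptions:

```java
// Hypothetical stand-ins for ReplicaInfo and ReplicaBuilder; field names here
// (blockId, genStamp) are illustrative, not the real Hadoop fields.
public class ReplicaBuilderSketch {
    // Minimal stand-in for ReplicaInfo: just the state this sketch needs.
    static class Replica {
        final long blockId;
        final long genStamp;
        Replica(long blockId, long genStamp) {
            this.blockId = blockId;
            this.genStamp = genStamp;
        }
    }

    // Stand-in builder: from(...) copies an existing replica's state,
    // setters override individual fields, build() creates the new replica.
    static class Builder {
        private long blockId;
        private long genStamp;

        Builder from(Replica r) {            // seed from an existing replica
            this.blockId = r.blockId;
            this.genStamp = r.genStamp;
            return this;
        }
        Builder setGenerationStamp(long gs) { // override one field
            this.genStamp = gs;
            return this;
        }
        Replica build() {
            return new Replica(blockId, genStamp);
        }
    }

    public static void main(String[] args) {
        Replica old = new Replica(1001L, 5L);
        // Copy the old replica but bump its generation stamp,
        // as a recovery or append might.
        Replica bumped = new Builder().from(old).setGenerationStamp(6L).build();
        System.out.println(bumped.blockId + " " + bumped.genStamp);
    }
}
```

Because `ReplicaInfo` subclasses are immutable-ish value-like objects, building a fresh instance from an old one is safer than mutating a replica that other datanode threads may still be reading.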
Uses of ReplicaInfo in org.apache.hadoop.hdfs.server.datanode.fsdataset
Methods in org.apache.hadoop.hdfs.server.datanode.fsdataset that return ReplicaInfo:

- FsDatasetSpi.moveBlockAcrossStorage(org.apache.hadoop.hdfs.protocol.ExtendedBlock block, org.apache.hadoop.fs.StorageType targetStorageType, String storageId): Move block from one storage to another storage.
- FsDatasetSpi.moveBlockAcrossVolumes(org.apache.hadoop.hdfs.protocol.ExtendedBlock block, FsVolumeSpi destination): Moves a given block from one volume to another volume.

Methods in org.apache.hadoop.hdfs.server.datanode.fsdataset that return types with arguments of type ReplicaInfo:

- FsDatasetSpi.getFinalizedBlocks(String bpid): Gets a list of references to the finalized blocks for the given block pool.
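The `moveBlockAcrossStorage` contract above relocates a block's replica to a volume of a different storage type and returns the moved `ReplicaInfo`. The sketch below is a hypothetical stand-in, not the real `FsDatasetSpi` implementation: the enum, `Replica` class, and in-memory map are assumptions standing in for `org.apache.hadoop.fs.StorageType`, `ReplicaInfo`, and the dataset's replica map:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for the moveBlockAcrossStorage(...) contract:
// relocate a block's replica to a target storage type, return the result.
public class StorageMoveSketch {
    enum StorageType { DISK, SSD, RAM_DISK } // stand-in for org.apache.hadoop.fs.StorageType

    static class Replica {                   // stand-in for ReplicaInfo
        final long blockId;
        StorageType storage;
        Replica(long blockId, StorageType storage) {
            this.blockId = blockId;
            this.storage = storage;
        }
    }

    private final Map<Long, Replica> volumeMap = new HashMap<>();

    void add(Replica r) { volumeMap.put(r.blockId, r); }

    // A real dataset would copy the block and meta files onto a volume of the
    // target type before updating its replica map; here we only flip the tag.
    Replica moveBlockAcrossStorage(long blockId, StorageType targetStorage) {
        Replica r = volumeMap.get(blockId);
        if (r == null) {
            throw new IllegalStateException("unknown block " + blockId);
        }
        if (r.storage == targetStorage) {
            return r;                        // already on that storage type
        }
        r.storage = targetStorage;
        return r;
    }

    public static void main(String[] args) {
        StorageMoveSketch ds = new StorageMoveSketch();
        ds.add(new Replica(42L, StorageType.DISK));
        Replica moved = ds.moveBlockAcrossStorage(42L, StorageType.SSD);
        System.out.println(moved.storage);
    }
}
```

Returning the moved replica (rather than `void`) lets callers such as the mover immediately report the block's new location without a second lookup.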
Uses of ReplicaInfo in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl
Methods in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl that return ReplicaInfo:

- FsVolumeImpl.activateSavedReplica(String bpid, ReplicaInfo replicaInfo, org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.RamDiskReplicaTracker.RamDiskReplica replicaState)
- FsVolumeImpl.hardLinkBlockToTmpLocation(org.apache.hadoop.hdfs.protocol.ExtendedBlock block, ReplicaInfo replicaInfo)
- FsVolumeImpl.moveBlockToTmpLocation(org.apache.hadoop.hdfs.protocol.ExtendedBlock block, ReplicaInfo replicaInfo, int smallBufferSize, org.apache.hadoop.conf.Configuration conf)

Methods in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl with parameters of type ReplicaInfo:

- FsVolumeImpl.activateSavedReplica(String bpid, ReplicaInfo replicaInfo, org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.RamDiskReplicaTracker.RamDiskReplica replicaState)
- FsVolumeImpl.append(String bpid, ReplicaInfo replicaInfo, long newGS, long estimateBlockLen)
- FsVolumeImpl.convertTemporaryToRbw(org.apache.hadoop.hdfs.protocol.ExtendedBlock b, ReplicaInfo temp)
- File[] FsVolumeImpl.copyBlockToLazyPersistLocation(String bpId, long blockId, long genStamp, ReplicaInfo replicaInfo, int smallBufferSize, org.apache.hadoop.conf.Configuration conf)
- FsVolumeImpl.hardLinkBlockToTmpLocation(org.apache.hadoop.hdfs.protocol.ExtendedBlock block, ReplicaInfo replicaInfo)
- FsVolumeImpl.moveBlockToTmpLocation(org.apache.hadoop.hdfs.protocol.ExtendedBlock block, ReplicaInfo replicaInfo, int smallBufferSize, org.apache.hadoop.conf.Configuration conf)
- void FsVolumeImpl.resolveDuplicateReplicas(String bpid, ReplicaInfo memBlockInfo, ReplicaInfo diskBlockInfo, org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaMap volumeMap)
- FsVolumeImpl.updateRURCopyOnTruncate(ReplicaInfo rur, String bpid, long newBlockId, long recoveryId, long newlength)
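`resolveDuplicateReplicas` above takes both the in-memory replica and the replica found on disk, which hints at its job: when a directory scan finds two copies of the same block, pick a winner and reconcile the replica map. The sketch below shows one plausible resolution policy (an assumption, stated in the comments; the real method also deletes the losing files and updates the `ReplicaMap`), using a hypothetical minimal `Replica` stand-in:

```java
// Hypothetical sketch in the spirit of FsVolumeImpl.resolveDuplicateReplicas:
// given the in-memory replica and the on-disk replica for the same block,
// keep the "better" one. The policy used here is an assumption for
// illustration: prefer the higher generation stamp, then the longer replica.
public class DuplicateResolveSketch {
    static class Replica {                // stand-in for ReplicaInfo
        final long genStamp;
        final long numBytes;
        Replica(long genStamp, long numBytes) {
            this.genStamp = genStamp;
            this.numBytes = numBytes;
        }
    }

    static Replica resolve(Replica memBlockInfo, Replica diskBlockInfo) {
        if (memBlockInfo.genStamp != diskBlockInfo.genStamp) {
            // A newer generation stamp means the replica saw a later
            // append/recovery, so the older copy is stale.
            return memBlockInfo.genStamp > diskBlockInfo.genStamp
                ? memBlockInfo : diskBlockInfo;
        }
        // Same generation stamp: keep the replica with more data.
        return memBlockInfo.numBytes >= diskBlockInfo.numBytes
            ? memBlockInfo : diskBlockInfo;
    }

    public static void main(String[] args) {
        Replica mem = new Replica(5L, 1024L);
        Replica disk = new Replica(6L, 512L);
        System.out.println(resolve(mem, disk) == disk); // newer genStamp wins
    }
}
```

The method returns `void` in the real API because its effect is the side effect on the volume map and on disk, not a value for the caller.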