Uses of Interface
org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi
Packages that use FsVolumeSpi
- org.apache.hadoop.hdfs.protocol
- org.apache.hadoop.hdfs.server.datanode
- org.apache.hadoop.hdfs.server.datanode.checker - Datanode support for running disk checks.
- org.apache.hadoop.hdfs.server.datanode.fsdataset
- org.apache.hadoop.hdfs.server.datanode.fsdataset.impl
Uses of FsVolumeSpi in org.apache.hadoop.hdfs.protocol
Methods in org.apache.hadoop.hdfs.protocol that return FsVolumeSpi
Uses of FsVolumeSpi in org.apache.hadoop.hdfs.server.datanode
Methods in org.apache.hadoop.hdfs.server.datanode that return FsVolumeSpi
- DirectoryScanner.ScanInfoVolumeReport.getVolume()
- Replica.getVolume() - Get the volume of replica.
- ReplicaInfo.getVolume()

Methods in org.apache.hadoop.hdfs.server.datanode with parameters of type FsVolumeSpi
- void FaultInjectorFileIoEvents.beforeFileIo(FsVolumeSpi volume, FileIoProvider.OPERATION op, long len)
- void FaultInjectorFileIoEvents.beforeMetadataOp(FsVolumeSpi volume, FileIoProvider.OPERATION op)
- void DataNode.checkDiskErrorAsync(FsVolumeSpi volume) - Check if there is a disk failure asynchronously and, if so, handle the error.
- boolean FileIoProvider.createFile(FsVolumeSpi volume, File f) - Create a file.
- static File DatanodeUtil.createFileWithExistsCheck(FsVolumeSpi volume, org.apache.hadoop.hdfs.protocol.Block b, File f, FileIoProvider fileIoProvider) - Create a new file.
- boolean FileIoProvider.delete(FsVolumeSpi volume, File f) - Delete a file.
- boolean FileIoProvider.deleteWithExistsCheck(FsVolumeSpi volume, File f) - Delete a file, first checking to see if it exists.
- static boolean DatanodeUtil.dirNoFilesRecursive(FsVolumeSpi volume, File dir, FileIoProvider fileIoProvider) - Checks whether there are any files anywhere in the directory tree rooted at dir (directories don't count as files); dir must exist.
- void FileIoProvider.dirSync(FsVolumeSpi volume, File dir) - Sync the given directory changes to a durable device.
- boolean FileIoProvider.exists(FsVolumeSpi volume, File f) - Check for file existence using File.exists().
- void FileIoProvider.flush(FsVolumeSpi volume, Flushable f) - See Flushable.flush().
- boolean FileIoProvider.fullyDelete(FsVolumeSpi volume, File dir) - Delete the given directory using FileUtil.fullyDelete(File).
- FileIoProvider.getFileInputStream(FsVolumeSpi volume, File f) - Create a FileInputStream using FileInputStream(File).
- FileIoProvider.getFileOutputStream(FsVolumeSpi volume, File f) - Create a FileOutputStream using FileOutputStream(File, boolean).
- FileIoProvider.getFileOutputStream(FsVolumeSpi volume, FileDescriptor fd) - Create a FileOutputStream using FileOutputStream(FileDescriptor).
- FileIoProvider.getFileOutputStream(FsVolumeSpi volume, File f, boolean append) - Create a FileOutputStream using FileOutputStream(File, boolean).
- int FileIoProvider.getHardLinkCount(FsVolumeSpi volume, File f) - Retrieves the number of links to the specified file.
- FileIoProvider.getRandomAccessFile(FsVolumeSpi volume, File f, String mode) - Create a RandomAccessFile using RandomAccessFile(File, String).
- FileIoProvider.getShareDeleteFileInputStream(FsVolumeSpi volume, File f, long offset) - Create a FileInputStream using NativeIO.getShareDeleteFileDescriptor(java.io.File, long).
- String[] FileIoProvider.list(FsVolumeSpi volume, File dir) - Get a listing of the given directory using FileUtil.listFiles(File).
- FileIoProvider.listDirectory(FsVolumeSpi volume, File dir, FilenameFilter filter) - Get a listing of the given directory using IOUtils.listDirectory(File, FilenameFilter).
- File[] FileIoProvider.listFiles(FsVolumeSpi volume, File dir) - Get a listing of the given directory using FileUtil.listFiles(File).
- boolean FileIoProvider.mkdirs(FsVolumeSpi volume, File dir) - See File.mkdirs().
- void FileIoProvider.mkdirsWithExistsCheck(FsVolumeSpi volume, File dir) - Create the target directory using File.mkdirs() only if it doesn't exist already.
- void FileIoProvider.move(FsVolumeSpi volume, Path src, Path target, CopyOption... options) - Move the src file to the target using Files.move(Path, Path, CopyOption...).
- void FileIoProvider.moveFile(FsVolumeSpi volume, File src, File target) - Move the src file to the target using FileUtils.moveFile(File, File).
- void FileIoProvider.nativeCopyFileUnbuffered(FsVolumeSpi volume, File src, File target, boolean preserveFileDate)
- FileIoProvider.openAndSeek(FsVolumeSpi volume, File f, long offset) - Create a FileInputStream using FileInputStream(File) and position it at the given offset.
- void FileIoProvider.posixFadvise(FsVolumeSpi volume, String identifier, FileDescriptor outFd, long offset, long length, int flags) - Call posix_fadvise on the given file descriptor.
- void BlockScanner.removeVolumeScanner(FsVolumeSpi volume) - Stops and removes a volume scanner.
- void FileIoProvider.rename(FsVolumeSpi volume, File src, File target) - Move the src file to the target using Storage.rename(File, File).
- void FileIoProvider.replaceFile(FsVolumeSpi volume, File src, File target) - Move the src file to the target using FileUtil.replaceFile(File, File).
- void DataNode.reportBadBlocks(org.apache.hadoop.hdfs.protocol.ExtendedBlock block, FsVolumeSpi volume) - Report a bad block which is hosted on the local DN.
- ReplicaBuilder.setFsVolume(FsVolumeSpi volume)
- void FileIoProvider.sync(FsVolumeSpi volume, FileOutputStream fos) - Sync the given FileOutputStream.
- void FileIoProvider.syncFileRange(FsVolumeSpi volume, FileDescriptor outFd, long offset, long numBytes, int flags) - Call sync_file_range on the given file descriptor.
- void FileIoProvider.transferToSocketFully(FsVolumeSpi volume, org.apache.hadoop.net.SocketOutputStream sockOut, FileChannel fileCh, long position, int count, org.apache.hadoop.io.LongWritable waitTime, org.apache.hadoop.io.LongWritable transferTime) - Transfer data from a FileChannel to a SocketOutputStream.
- static void LocalReplica.truncateBlock(FsVolumeSpi volume, File blockFile, File metaFile, long oldlen, long newlen, FileIoProvider fileIoProvider)

Method parameters in org.apache.hadoop.hdfs.server.datanode with type arguments of type FsVolumeSpi
- void DataNode.handleVolumeFailures(Set<FsVolumeSpi> unhealthyVolumes)

Constructors in org.apache.hadoop.hdfs.server.datanode with parameters of type FsVolumeSpi
- FinalizedProvidedReplica(long blockId, URI fileURI, long fileOffset, long blockLen, long genStamp, org.apache.hadoop.fs.PathHandle pathHandle, FsVolumeSpi volume, org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem remoteFS)
- FinalizedProvidedReplica(long blockId, org.apache.hadoop.fs.Path pathPrefix, String pathSuffix, long fileOffset, long blockLen, long genStamp, org.apache.hadoop.fs.PathHandle pathHandle, FsVolumeSpi volume, org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem remoteFS)
- FinalizedProvidedReplica(FileRegion fileRegion, FsVolumeSpi volume, org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem remoteFS)
- FinalizedReplica(long blockId, long len, long genStamp, FsVolumeSpi vol, File dir) - Constructor.
- FinalizedReplica(long blockId, long len, long genStamp, FsVolumeSpi vol, File dir, byte[] checksum) - Constructor.
- FinalizedReplica(org.apache.hadoop.hdfs.protocol.Block block, FsVolumeSpi vol, File dir) - Constructor.
- FinalizedReplica(org.apache.hadoop.hdfs.protocol.Block block, FsVolumeSpi vol, File dir, byte[] checksum) - Constructor.
- LocalReplicaInPipeline(long blockId, long genStamp, FsVolumeSpi vol, File dir, long bytesToReserve) - Constructor for a zero length replica.
- ProvidedReplica(long blockId, URI fileURI, long fileOffset, long blockLen, long genStamp, org.apache.hadoop.fs.PathHandle pathHandle, FsVolumeSpi volume, org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem remoteFS) - Constructor.
- ProvidedReplica(long blockId, org.apache.hadoop.fs.Path pathPrefix, String pathSuffix, long fileOffset, long blockLen, long genStamp, org.apache.hadoop.fs.PathHandle pathHandle, FsVolumeSpi volume, org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem remoteFS) - Constructor.
- ReplicaBeingWritten(long blockId, long len, long genStamp, FsVolumeSpi vol, File dir, Thread writer, long bytesToReserve) - Constructor.
- ReplicaBeingWritten(long blockId, long genStamp, FsVolumeSpi vol, File dir, long bytesToReserve) - Constructor for a zero length replica.
- ReplicaBeingWritten(org.apache.hadoop.hdfs.protocol.Block block, FsVolumeSpi vol, File dir, Thread writer) - Constructor.
- ReplicaWaitingToBeRecovered(long blockId, long len, long genStamp, FsVolumeSpi vol, File dir) - Constructor.
- ReplicaWaitingToBeRecovered(org.apache.hadoop.hdfs.protocol.Block block, FsVolumeSpi vol, File dir) - Constructor.
- ReportCompiler(FsVolumeSpi volume) - Create a report compiler for the given volume.
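A pattern visible throughout the listing above is that every FileIoProvider operation takes the FsVolumeSpi as its first argument, so hooks such as FaultInjectorFileIoEvents.beforeFileIo(volume, op, len) can observe or fail I/O on a per-volume basis. The following is a minimal, self-contained sketch of that hook-per-volume idea; all class and method names in it are invented for illustration and are not Hadoop's.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Illustrative sketch (not Hadoop code): a file-I/O wrapper that fires a
// before-I/O hook and tracks per-volume byte counts, mirroring the way
// FileIoProvider threads the volume through every call.
class IoProviderSketch<V> {
    enum Operation { READ, WRITE, SYNC }

    // Stand-in for FaultInjectorFileIoEvents: a test can supply a hook that
    // throws to simulate a disk fault on a specific volume.
    interface BeforeIoHook<V> {
        void beforeFileIo(V volume, Operation op, long len);
    }

    private final BeforeIoHook<V> hook;
    private final Map<V, LongAdder> bytesPerVolume = new ConcurrentHashMap<>();

    IoProviderSketch(BeforeIoHook<V> hook) {
        this.hook = hook;
    }

    // Stand-in for a real write: fires the hook first, then records bytes.
    void write(V volume, long len) {
        hook.beforeFileIo(volume, Operation.WRITE, len); // may throw to inject a fault
        bytesPerVolume.computeIfAbsent(volume, v -> new LongAdder()).add(len);
    }

    long bytesWritten(V volume) {
        LongAdder a = bytesPerVolume.get(volume);
        return a == null ? 0 : a.sum();
    }
}
```

Because the volume is part of every call, metrics and fault handling can be attributed to a single disk rather than to the datanode as a whole.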
Uses of FsVolumeSpi in org.apache.hadoop.hdfs.server.datanode.checker
Methods in org.apache.hadoop.hdfs.server.datanode.checker that return types with arguments of type FsVolumeSpi
- DatasetVolumeChecker.checkAllVolumes(FsDatasetSpi<? extends FsVolumeSpi> dataset) - Run checks against all volumes of a dataset.

Methods in org.apache.hadoop.hdfs.server.datanode.checker with parameters of type FsVolumeSpi
- boolean DatasetVolumeChecker.checkVolume(FsVolumeSpi volume, DatasetVolumeChecker.Callback callback) - Check a single volume asynchronously, returning a ListenableFuture that can be used to retrieve the final result.

Method parameters in org.apache.hadoop.hdfs.server.datanode.checker with type arguments of type FsVolumeSpi
- void DatasetVolumeChecker.Callback.call(Set<FsVolumeSpi> healthyVolumes, Set<FsVolumeSpi> failedVolumes)
- DatasetVolumeChecker.checkAllVolumes(FsDatasetSpi<? extends FsVolumeSpi> dataset) - Run checks against all volumes of a dataset.
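The DatasetVolumeChecker.Callback signature above, call(Set<FsVolumeSpi> healthyVolumes, Set<FsVolumeSpi> failedVolumes), suggests the shape of the checker: probe each volume concurrently, then deliver the partitioned result in one callback. Below is a hedged, self-contained sketch of that shape, with a plain predicate standing in for a real disk probe; the class and method names are invented, not Hadoop's.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Predicate;

// Illustrative sketch (not the Hadoop implementation) of checking all
// volumes concurrently and reporting healthy/failed sets via a callback.
class VolumeCheckerSketch {
    interface Callback<V> {
        void call(Set<V> healthyVolumes, Set<V> failedVolumes);
    }

    // Runs one check per volume asynchronously, waits for all of them, then
    // invokes the callback exactly once with the partitioned results.
    static <V> void checkAllVolumes(List<V> volumes,
                                    Predicate<V> isHealthy,
                                    Callback<V> callback) {
        Set<V> healthy = ConcurrentHashMap.newKeySet();
        Set<V> failed = ConcurrentHashMap.newKeySet();
        List<CompletableFuture<Void>> futures = new ArrayList<>();
        for (V v : volumes) {
            futures.add(CompletableFuture.runAsync(() ->
                    (isHealthy.test(v) ? healthy : failed).add(v)));
        }
        // Block until every probe finishes; the real checker instead exposes
        // a ListenableFuture so callers need not block.
        CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
        callback.call(healthy, failed);
    }
}
```

The concurrent probes matter because a single failing disk can hang synchronous I/O; running checks in parallel keeps one slow volume from delaying the report on the others.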
Uses of FsVolumeSpi in org.apache.hadoop.hdfs.server.datanode.fsdataset
Classes in org.apache.hadoop.hdfs.server.datanode.fsdataset with type parameters of type FsVolumeSpi
- class AvailableSpaceVolumeChoosingPolicy<V extends FsVolumeSpi> - A DN volume choosing policy which takes into account the amount of free space on each of the available volumes when considering where to assign a new replica allocation.
- interface FsDatasetSpi<V extends FsVolumeSpi> - This is a service provider interface for the underlying storage that stores replicas for a data node.
- class RoundRobinVolumeChoosingPolicy<V extends FsVolumeSpi> - Choose volumes with the same storage type in round-robin order.
- interface VolumeChoosingPolicy<V extends FsVolumeSpi> - This interface specifies the policy for choosing volumes to store replicas.

Methods in org.apache.hadoop.hdfs.server.datanode.fsdataset that return FsVolumeSpi
- FsDatasetSpi.FsVolumeReferences.get(int index) - Get the volume for a given index.
- FsVolumeReference.getVolume() - Returns the underlying volume object.
- FsVolumeSpi.ScanInfo.getVolume() - Returns the volume that contains the block that this object describes.

Methods in org.apache.hadoop.hdfs.server.datanode.fsdataset that return types with arguments of type FsVolumeSpi

Methods in org.apache.hadoop.hdfs.server.datanode.fsdataset with parameters of type FsVolumeSpi
- FsDatasetSpi.moveBlockAcrossVolumes(org.apache.hadoop.hdfs.protocol.ExtendedBlock block, FsVolumeSpi destination) - Moves a given block from one volume to another volume.

Method parameters in org.apache.hadoop.hdfs.server.datanode.fsdataset with type arguments of type FsVolumeSpi
- void FsDatasetSpi.handleVolumeFailures(Set<FsVolumeSpi> failedVolumes) - Check if all the data directories are healthy.

Constructors in org.apache.hadoop.hdfs.server.datanode.fsdataset with parameters of type FsVolumeSpi
- ReplicaOutputStreams(OutputStream dataOut, OutputStream checksumOut, org.apache.hadoop.util.DataChecksum checksum, FsVolumeSpi volume, FileIoProvider fileIoProvider) - Create an object with a data output stream, a checksum output stream and a checksum.
- ScanInfo(long blockId, File basePath, String blockFile, String metaFile, FsVolumeSpi vol) - Create a ScanInfo object for a block.
- ScanInfo(long blockId, FsVolumeSpi vol, FileRegion fileRegion, long length) - Create a ScanInfo object for a block.
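RoundRobinVolumeChoosingPolicy<V extends FsVolumeSpi>, listed above, chooses volumes of the same storage type in round-robin order. The rotation itself can be sketched in a few lines; note this is a simplified illustration, not Hadoop's implementation, which also skips volumes lacking enough free space for the requested block and throws DiskOutOfSpaceException when none qualifies. The class name below is invented.

```java
import java.util.List;

// Simplified sketch of a round-robin volume choosing policy: remember the
// index of the next volume and advance it on every successful choice.
class RoundRobinSketch<V> {
    private int curVolume = 0; // index of the next volume to try

    // Pick the next volume in rotation. A real policy would also verify the
    // volume has enough available space for the replica being allocated.
    synchronized V chooseVolume(List<V> volumes) {
        if (volumes.isEmpty()) {
            throw new IllegalArgumentException("no volumes available");
        }
        // Wrap around, and stay in range if volumes were removed since the
        // last call (volume lists shrink when disks fail).
        curVolume = curVolume % volumes.size();
        return volumes.get(curVolume++);
    }
}
```

Spreading new replicas across volumes this way balances write load across disks; the AvailableSpaceVolumeChoosingPolicy variant instead biases the choice toward volumes with more free space.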
Uses of FsVolumeSpi in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl
Classes in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl that implement FsVolumeSpi

Methods in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl that return types with arguments of type FsVolumeSpi
- FsDatasetSpi<? extends FsVolumeSpi> FsVolumeImpl.getDataset()
- AddBlockPoolException.getFailingVolumes()

Constructor parameters in org.apache.hadoop.hdfs.server.datanode.fsdataset.impl with type arguments of type FsVolumeSpi
- AddBlockPoolException(Map<FsVolumeSpi, IOException> unhealthyDataDirs)