Package org.apache.hadoop.hdfs
Class StripedDataStreamer
java.lang.Object
java.lang.Thread
org.apache.hadoop.util.Daemon
org.apache.hadoop.hdfs.StripedDataStreamer
- All Implemented Interfaces:
Runnable
@Private
public class StripedDataStreamer
extends org.apache.hadoop.util.Daemon
This class extends DataStreamer to support writing striped blocks to datanodes.
A DFSStripedOutputStream has multiple StripedDataStreamers. Whenever the streamers need to talk to the namenode, only the fastest streamer sends an RPC call to the namenode and then populates the result for the other streamers.
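The coordination described above, where only the fastest streamer performs the namenode call and the others reuse its result, can be sketched with a shared CompletableFuture. This is a hedged, self-contained illustration of the pattern only: FastestStreamerSketch and streamerStep are invented names, and the namenode RPC is simulated by a string rather than Hadoop's actual LocatedBlock machinery.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

// Sketch of the "fastest streamer sends the RPC" pattern; not Hadoop code.
public class FastestStreamerSketch {
    // Each streamer races to complete the shared future; only the first
    // caller "sends the RPC", and every streamer then reads the same result.
    static String streamerStep(CompletableFuture<String> shared, int id) {
        shared.complete("located-block(from streamer " + id + ")");
        return shared.join();  // losers reuse the winner's populated result
    }

    public static void main(String[] args) throws Exception {
        CompletableFuture<String> shared = new CompletableFuture<>();
        ExecutorService pool = Executors.newFixedThreadPool(3);
        List<Future<String>> results = new ArrayList<>();
        for (int i = 0; i < 3; i++) {
            final int id = i;
            results.add(pool.submit(() -> streamerStep(shared, id)));
        }
        // All three streamers observe one identical "namenode" answer.
        for (Future<String> f : results) {
            System.out.println(f.get());
        }
        pool.shutdown();
    }
}
```

CompletableFuture.complete is the key primitive here: it returns true for exactly one caller, which stands in for the single RPC the documentation describes.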
Nested Class Summary
Nested classes/interfaces inherited from class org.apache.hadoop.util.Daemon
org.apache.hadoop.util.Daemon.DaemonFactory
Nested classes/interfaces inherited from class java.lang.Thread
Thread.State, Thread.UncaughtExceptionHandler
Field Summary
Fields
protected org.apache.hadoop.security.token.Token<BlockTokenIdentifier> accessToken
protected final org.apache.hadoop.hdfs.DataStreamer.BlockToWrite block
protected long bytesSent
protected final LinkedList<DFSPacket> dataQueue
protected final DFSClient dfsClient
protected final org.apache.hadoop.thirdparty.com.google.common.cache.LoadingCache<DatanodeInfo,DatanodeInfo> excludedNodes
protected final String src
protected final HdfsFileStatus stat
Fields inherited from class java.lang.Thread
MAX_PRIORITY, MIN_PRIORITY, NORM_PRIORITY -
Method Summary
Methods
protected void endBlock()
protected void setPipeline(DatanodeInfo[] newNodes, org.apache.hadoop.fs.StorageType[] newStorageTypes, String[] newStorageIDs)
protected void setPipeline(LocatedBlock lb)
protected void setupPipelineForCreate()
    Open a DataStreamer to a DataNode so that it can be written to.
protected boolean setupPipelineInternal(DatanodeInfo[] nodes, org.apache.hadoop.fs.StorageType[] nodeStorageTypes, String[] nodeStorageIDs)
String toString()
void updatePipeline(long newGS)
    Update pipeline at the namenode.
void work()
Methods inherited from class org.apache.hadoop.util.Daemon
getRunnable, run, start
Methods inherited from class java.lang.Thread
activeCount, checkAccess, clone, countStackFrames, currentThread, dumpStack, enumerate, getAllStackTraces, getContextClassLoader, getDefaultUncaughtExceptionHandler, getId, getName, getPriority, getStackTrace, getState, getThreadGroup, getUncaughtExceptionHandler, holdsLock, interrupt, interrupted, isAlive, isDaemon, isInterrupted, join, join, join, onSpinWait, resume, setContextClassLoader, setDaemon, setDefaultUncaughtExceptionHandler, setName, setPriority, setUncaughtExceptionHandler, sleep, sleep, stop, suspend, yield
-
Field Details
-
block
protected final org.apache.hadoop.hdfs.DataStreamer.BlockToWrite block -
accessToken
protected org.apache.hadoop.security.token.Token<BlockTokenIdentifier> accessToken
bytesSent
protected long bytesSent -
dfsClient
protected final DFSClient dfsClient
src
protected final String src
stat
protected final HdfsFileStatus stat
dataQueue
protected final LinkedList<DFSPacket> dataQueue
excludedNodes
protected final org.apache.hadoop.thirdparty.com.google.common.cache.LoadingCache<DatanodeInfo,DatanodeInfo> excludedNodes
-
-
Method Details
-
endBlock
protected void endBlock() -
setupPipelineForCreate
protected void setupPipelineForCreate() throws IOException
Open a DataStreamer to a DataNode so that it can be written to. This happens when a file is created and each time a new block is allocated. It must get the block ID and the IDs of the destination datanodes from the namenode, and it returns the list of target datanodes.
Throws:
IOException
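The create-time sequence just described (ask the namenode for a new block and its destination datanodes, then open a stream to the first target) can be sketched as follows. This is a hedged, self-contained illustration only; NamenodeStub, PipelineSetupSketch, and the datanode addresses are invented stand-ins, not Hadoop APIs.

```java
import java.util.List;

// Invented stand-in for the namenode's block-allocation RPC.
class NamenodeStub {
    private long nextBlockId = 1000;

    // Simulate block allocation: advance the block ID and hand out targets.
    List<String> addBlock() {
        nextBlockId++;
        return List.of("dn1:9866", "dn2:9866", "dn3:9866");
    }
}

public class PipelineSetupSketch {
    // Mirrors the documented contract: get the block ID and destination
    // datanodes from the namenode, then (in real HDFS) open a stream to
    // the first target; the connection step is elided here.
    static List<String> setupPipelineForCreate(NamenodeStub nn) {
        List<String> targets = nn.addBlock();  // block ID + destinations
        // ... connect to targets.get(0) and build the write pipeline ...
        return targets;                        // list of target datanodes
    }

    public static void main(String[] args) {
        System.out.println(setupPipelineForCreate(new NamenodeStub()));
    }
}
```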
-
setupPipelineInternal
protected boolean setupPipelineInternal(DatanodeInfo[] nodes, org.apache.hadoop.fs.StorageType[] nodeStorageTypes, String[] nodeStorageIDs) throws IOException
Throws:
IOException
-
toString
public String toString()
setPipeline
protected void setPipeline(LocatedBlock lb)
setPipeline
protected void setPipeline(DatanodeInfo[] newNodes, org.apache.hadoop.fs.StorageType[] newStorageTypes, String[] newStorageIDs) -
work
public void work()
Overrides:
work in class org.apache.hadoop.util.Daemon
-
updatePipeline
public void updatePipeline(long newGS) throws IOException
Update pipeline at the namenode.
Throws:
IOException
-