Package org.apache.hadoop.mapred
Class LocalJobRunner
java.lang.Object
org.apache.hadoop.mapred.LocalJobRunner
- All Implemented Interfaces:
org.apache.hadoop.ipc.VersionedProtocol, org.apache.hadoop.mapreduce.protocol.ClientProtocol
@Private
@Unstable
public class LocalJobRunner
extends Object
implements org.apache.hadoop.mapreduce.protocol.ClientProtocol
Implements MapReduce locally, in-process, for debugging.
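LocalJobRunner is usually selected implicitly rather than constructed by hand: setting the standard MapReduce configuration key `mapreduce.framework.name` to `local` routes job submission through this class. A minimal sketch of the relevant mapred-site.xml fragment:

```xml
<!-- mapred-site.xml: run MapReduce jobs in-process via LocalJobRunner -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>local</value>
  </property>
</configuration>
```

The same key can also be set programmatically on a job's Configuration before submission.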
-
Field Summary
Fields
- static final String INTERMEDIATE_DATA_ENCRYPTION_ALGO
- static final String LOCAL_MAX_MAPS: The maximum number of map tasks to run in parallel in LocalJobRunner
- static final String LOCAL_MAX_REDUCES: The maximum number of reduce tasks to run in parallel in LocalJobRunner
- static final org.slf4j.Logger LOG
Fields inherited from interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
- versionID
-
Constructor Summary
Constructors
- LocalJobRunner(org.apache.hadoop.conf.Configuration conf)
- LocalJobRunner(org.apache.hadoop.mapred.JobConf conf): Deprecated.
-
Method Summary
- void cancelDelegationToken(org.apache.hadoop.security.token.Token<org.apache.hadoop.mapreduce.security.token.delegation.DelegationTokenIdentifier> token)
- org.apache.hadoop.mapreduce.TaskTrackerInfo[] getActiveTrackers(): Get all active trackers in cluster.
- org.apache.hadoop.mapreduce.JobStatus[] getAllJobs()
- org.apache.hadoop.mapreduce.TaskTrackerInfo[] getBlacklistedTrackers(): Get all blacklisted trackers in cluster.
- org.apache.hadoop.mapreduce.QueueInfo[] getChildQueues(String queueName)
- org.apache.hadoop.mapreduce.ClusterMetrics getClusterMetrics()
- org.apache.hadoop.security.token.Token<org.apache.hadoop.mapreduce.security.token.delegation.DelegationTokenIdentifier> getDelegationToken(org.apache.hadoop.io.Text renewer)
- String getFilesystemName()
- org.apache.hadoop.mapreduce.Counters getJobCounters(org.apache.hadoop.mapreduce.JobID id)
- String getJobHistoryDir()
- org.apache.hadoop.mapreduce.JobStatus getJobStatus(org.apache.hadoop.mapreduce.JobID id)
- org.apache.hadoop.mapreduce.Cluster.JobTrackerStatus getJobTrackerStatus()
- static int getLocalMaxRunningMaps(org.apache.hadoop.mapreduce.JobContext job)
- static int getLocalMaxRunningReduces(org.apache.hadoop.mapreduce.JobContext job)
- org.apache.hadoop.mapreduce.v2.LogParams getLogFileParams(org.apache.hadoop.mapreduce.JobID jobID, org.apache.hadoop.mapreduce.TaskAttemptID taskAttemptID)
- org.apache.hadoop.mapreduce.JobID getNewJobID()
- org.apache.hadoop.ipc.ProtocolSignature getProtocolSignature(String protocol, long clientVersion, int clientMethodsHash)
- long getProtocolVersion(String protocol, long clientVersion)
- org.apache.hadoop.mapreduce.QueueInfo getQueue(String queueName)
- org.apache.hadoop.mapreduce.QueueAclsInfo[] getQueueAclsForCurrentUser()
- org.apache.hadoop.security.authorize.AccessControlList getQueueAdmins(String queueName)
- org.apache.hadoop.mapreduce.QueueInfo[] getQueues()
- org.apache.hadoop.mapreduce.QueueInfo[] getRootQueues()
- String getStagingAreaDir()
- String getSystemDir()
- org.apache.hadoop.mapreduce.TaskCompletionEvent[] getTaskCompletionEvents(org.apache.hadoop.mapreduce.JobID jobid, int fromEventId, int maxEvents)
- String[] getTaskDiagnostics(org.apache.hadoop.mapreduce.TaskAttemptID taskid): Returns the diagnostic information for a particular task in the given job.
- org.apache.hadoop.mapreduce.TaskReport[] getTaskReports(org.apache.hadoop.mapreduce.JobID id, org.apache.hadoop.mapreduce.TaskType type)
- long getTaskTrackerExpiryInterval()
- void killJob(org.apache.hadoop.mapreduce.JobID id)
- boolean killTask(org.apache.hadoop.mapreduce.TaskAttemptID taskId, boolean shouldFail)
- long renewDelegationToken(org.apache.hadoop.security.token.Token<org.apache.hadoop.mapreduce.security.token.delegation.DelegationTokenIdentifier> token)
- void setJobPriority(org.apache.hadoop.mapreduce.JobID id, String jp)
- static void setLocalMaxRunningMaps(org.apache.hadoop.mapreduce.JobContext job, int maxMaps): Set the max number of map tasks to run concurrently in the LocalJobRunner.
- static void setLocalMaxRunningReduces(org.apache.hadoop.mapreduce.JobContext job, int maxReduces): Set the max number of reduce tasks to run concurrently in the LocalJobRunner.
- org.apache.hadoop.mapreduce.JobStatus submitJob(org.apache.hadoop.mapreduce.JobID jobid, String jobSubmitDir, org.apache.hadoop.security.Credentials credentials)
-
Field Details
-
LOG
public static final org.slf4j.Logger LOG
-
LOCAL_MAX_MAPS
public static final String LOCAL_MAX_MAPS
The maximum number of map tasks to run in parallel in LocalJobRunner.
- See Also:
- Constant Field Values
-
LOCAL_MAX_REDUCES
public static final String LOCAL_MAX_REDUCES
The maximum number of reduce tasks to run in parallel in LocalJobRunner.
- See Also:
- Constant Field Values
-
INTERMEDIATE_DATA_ENCRYPTION_ALGO
public static final String INTERMEDIATE_DATA_ENCRYPTION_ALGO
- See Also:
- Constant Field Values
-
-
Constructor Details
-
LocalJobRunner
public LocalJobRunner(org.apache.hadoop.conf.Configuration conf) throws IOException
- Throws:
IOException
-
LocalJobRunner
public LocalJobRunner(org.apache.hadoop.mapred.JobConf conf) throws IOException
Deprecated.
- Throws:
IOException
-
-
Method Details
-
getProtocolVersion
public long getProtocolVersion(String protocol, long clientVersion)
- Specified by:
getProtocolVersion in interface org.apache.hadoop.ipc.VersionedProtocol
-
getProtocolSignature
public org.apache.hadoop.ipc.ProtocolSignature getProtocolSignature(String protocol, long clientVersion, int clientMethodsHash) throws IOException
- Specified by:
getProtocolSignature in interface org.apache.hadoop.ipc.VersionedProtocol
- Throws:
IOException
-
getNewJobID
public org.apache.hadoop.mapreduce.JobID getNewJobID()
- Specified by:
getNewJobID in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
-
submitJob
public org.apache.hadoop.mapreduce.JobStatus submitJob(org.apache.hadoop.mapreduce.JobID jobid, String jobSubmitDir, org.apache.hadoop.security.Credentials credentials) throws IOException
- Specified by:
submitJob in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
- Throws:
IOException
-
killJob
public void killJob(org.apache.hadoop.mapreduce.JobID id)
- Specified by:
killJob in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
-
setJobPriority
public void setJobPriority(org.apache.hadoop.mapreduce.JobID id, String jp) throws IOException
- Specified by:
setJobPriority in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
- Throws:
IOException
-
killTask
public boolean killTask(org.apache.hadoop.mapreduce.TaskAttemptID taskId, boolean shouldFail) throws IOException
- Specified by:
killTask in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
- Throws:
IOException
-
getTaskReports
public org.apache.hadoop.mapreduce.TaskReport[] getTaskReports(org.apache.hadoop.mapreduce.JobID id, org.apache.hadoop.mapreduce.TaskType type)
- Specified by:
getTaskReports in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
-
getJobStatus
public org.apache.hadoop.mapreduce.JobStatus getJobStatus(org.apache.hadoop.mapreduce.JobID id)
- Specified by:
getJobStatus in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
-
getJobCounters
public org.apache.hadoop.mapreduce.Counters getJobCounters(org.apache.hadoop.mapreduce.JobID id)
- Specified by:
getJobCounters in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
-
getFilesystemName
public String getFilesystemName() throws IOException
- Specified by:
getFilesystemName in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
- Throws:
IOException
-
getClusterMetrics
public org.apache.hadoop.mapreduce.ClusterMetrics getClusterMetrics()
- Specified by:
getClusterMetrics in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
-
getJobTrackerStatus
public org.apache.hadoop.mapreduce.Cluster.JobTrackerStatus getJobTrackerStatus()
- Specified by:
getJobTrackerStatus in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
-
getTaskTrackerExpiryInterval
public long getTaskTrackerExpiryInterval() throws IOException, InterruptedException
- Specified by:
getTaskTrackerExpiryInterval in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
- Throws:
IOException, InterruptedException
-
getActiveTrackers
public org.apache.hadoop.mapreduce.TaskTrackerInfo[] getActiveTrackers() throws IOException, InterruptedException
Get all active trackers in cluster.
- Specified by:
getActiveTrackers in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
- Returns:
array of TaskTrackerInfo
- Throws:
IOException, InterruptedException
-
getBlacklistedTrackers
public org.apache.hadoop.mapreduce.TaskTrackerInfo[] getBlacklistedTrackers() throws IOException, InterruptedException
Get all blacklisted trackers in cluster.
- Specified by:
getBlacklistedTrackers in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
- Returns:
array of TaskTrackerInfo
- Throws:
IOException, InterruptedException
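These tracker queries are normally reached through the public org.apache.hadoop.mapreduce.Cluster client rather than by invoking LocalJobRunner directly. A minimal sketch, assuming the Hadoop MapReduce client libraries are on the classpath; Cluster.getActiveTaskTrackers() and Cluster.getBlackListedTaskTrackers() are the client-side counterparts of the protocol methods above:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Cluster;
import org.apache.hadoop.mapreduce.TaskTrackerInfo;

public class LocalTrackers {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Route the client to the in-process LocalJobRunner.
        conf.set("mapreduce.framework.name", "local");
        Cluster cluster = new Cluster(conf);
        // In local mode there are no real trackers, so both calls
        // are expected to return empty (but non-null) arrays.
        TaskTrackerInfo[] active = cluster.getActiveTaskTrackers();
        TaskTrackerInfo[] blacklisted = cluster.getBlackListedTaskTrackers();
        System.out.println(active.length + " active, " + blacklisted.length + " blacklisted");
        cluster.close();
    }
}
```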
-
getTaskCompletionEvents
public org.apache.hadoop.mapreduce.TaskCompletionEvent[] getTaskCompletionEvents(org.apache.hadoop.mapreduce.JobID jobid, int fromEventId, int maxEvents) throws IOException
- Specified by:
getTaskCompletionEvents in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
- Throws:
IOException
-
getAllJobs
public org.apache.hadoop.mapreduce.JobStatus[] getAllJobs()
- Specified by:
getAllJobs in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
-
getTaskDiagnostics
public String[] getTaskDiagnostics(org.apache.hadoop.mapreduce.TaskAttemptID taskid) throws IOException
Returns the diagnostic information for a particular task in the given job. To be implemented.
- Specified by:
getTaskDiagnostics in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
- Throws:
IOException
-
getSystemDir
public String getSystemDir()
- Specified by:
getSystemDir in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
- See Also:
-
ClientProtocol.getSystemDir()
-
getQueueAdmins
public org.apache.hadoop.security.authorize.AccessControlList getQueueAdmins(String queueName) throws IOException
- Specified by:
getQueueAdmins in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
- Throws:
IOException
- See Also:
-
ClientProtocol.getQueueAdmins(String)
-
getStagingAreaDir
public String getStagingAreaDir() throws IOException
- Specified by:
getStagingAreaDir in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
- Throws:
IOException
- See Also:
-
ClientProtocol.getStagingAreaDir()
-
getJobHistoryDir
public String getJobHistoryDir()
- Specified by:
getJobHistoryDir in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
-
getChildQueues
public org.apache.hadoop.mapreduce.QueueInfo[] getChildQueues(String queueName) throws IOException
- Specified by:
getChildQueues in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
- Throws:
IOException
-
getRootQueues
public org.apache.hadoop.mapreduce.QueueInfo[] getRootQueues() throws IOException
- Specified by:
getRootQueues in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
- Throws:
IOException
-
getQueues
public org.apache.hadoop.mapreduce.QueueInfo[] getQueues() throws IOException
- Specified by:
getQueues in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
- Throws:
IOException
-
getQueue
public org.apache.hadoop.mapreduce.QueueInfo getQueue(String queueName) throws IOException
- Specified by:
getQueue in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
- Throws:
IOException
-
getQueueAclsForCurrentUser
public org.apache.hadoop.mapreduce.QueueAclsInfo[] getQueueAclsForCurrentUser() throws IOException
- Specified by:
getQueueAclsForCurrentUser in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
- Throws:
IOException
-
setLocalMaxRunningMaps
public static void setLocalMaxRunningMaps(org.apache.hadoop.mapreduce.JobContext job, int maxMaps)
Set the max number of map tasks to run concurrently in the LocalJobRunner.
- Parameters:
job - the job to configure
maxMaps - the maximum number of map tasks to allow.
-
getLocalMaxRunningMaps
public static int getLocalMaxRunningMaps(org.apache.hadoop.mapreduce.JobContext job)
- Returns:
the max number of map tasks to run concurrently in the LocalJobRunner.
-
setLocalMaxRunningReduces
public static void setLocalMaxRunningReduces(org.apache.hadoop.mapreduce.JobContext job, int maxReduces)
Set the max number of reduce tasks to run concurrently in the LocalJobRunner.
- Parameters:
job - the job to configure
maxReduces - the maximum number of reduce tasks to allow.
-
getLocalMaxRunningReduces
public static int getLocalMaxRunningReduces(org.apache.hadoop.mapreduce.JobContext job)
- Returns:
the max number of reduce tasks to run concurrently in the LocalJobRunner.
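The four static helpers above only read and write per-job configuration, so they can be exercised without actually running a job. A minimal sketch, assuming the Hadoop MapReduce client libraries are on the classpath (Job implements JobContext, so a Job instance can be passed directly):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapred.LocalJobRunner;
import org.apache.hadoop.mapreduce.Job;

public class LocalParallelism {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration());
        // Allow up to 4 map tasks and 2 reduce tasks to run concurrently
        // when this job executes under LocalJobRunner.
        LocalJobRunner.setLocalMaxRunningMaps(job, 4);
        LocalJobRunner.setLocalMaxRunningReduces(job, 2);
        System.out.println(LocalJobRunner.getLocalMaxRunningMaps(job));    // 4
        System.out.println(LocalJobRunner.getLocalMaxRunningReduces(job)); // 2
    }
}
```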
-
cancelDelegationToken
public void cancelDelegationToken(org.apache.hadoop.security.token.Token<org.apache.hadoop.mapreduce.security.token.delegation.DelegationTokenIdentifier> token) throws IOException, InterruptedException
- Specified by:
cancelDelegationToken in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
- Throws:
IOException, InterruptedException
-
getDelegationToken
public org.apache.hadoop.security.token.Token<org.apache.hadoop.mapreduce.security.token.delegation.DelegationTokenIdentifier> getDelegationToken(org.apache.hadoop.io.Text renewer) throws IOException, InterruptedException
- Specified by:
getDelegationToken in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
- Throws:
IOException, InterruptedException
-
renewDelegationToken
public long renewDelegationToken(org.apache.hadoop.security.token.Token<org.apache.hadoop.mapreduce.security.token.delegation.DelegationTokenIdentifier> token) throws IOException, InterruptedException
- Specified by:
renewDelegationToken in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
- Throws:
IOException, InterruptedException
-
getLogFileParams
public org.apache.hadoop.mapreduce.v2.LogParams getLogFileParams(org.apache.hadoop.mapreduce.JobID jobID, org.apache.hadoop.mapreduce.TaskAttemptID taskAttemptID) throws IOException, InterruptedException
- Specified by:
getLogFileParams in interface org.apache.hadoop.mapreduce.protocol.ClientProtocol
- Throws:
IOException, InterruptedException
-