task table
Retrieves histograms and line charts for task metrics.
A Hadoop job sets the rules that the JobTracker service uses to break an input data set into discrete tasks and assign those tasks to individual nodes. Use the task table command to retrieve task analytics about the jobs running on your cluster. The task metric data covers the tasks that make up a specific job, as well as the individual task attempts, and includes each task attempt's data throughput, measured both in records per second and in bytes per second. The metric data can be formatted for histogram display or line chart display. To issue the task table command, the mapr-metrics package must be installed on all nodes where the webserver and jobtracker services are configured to run.
Syntax
- REST
-
http[s]://<host>:<port>/rest/task/table?output=terse&filter=<string>&chart=<chart_type>&columns=<list_of_columns>&scale=<scale_type>
Parameters
Parameter | Description
---|---
filter | Filters results to match the value of a specified string.
chart | Chart type to use: bar for a histogram, line for a line chart.
columns | Comma-separated list of column names to return.
bincount | Number of histogram bins.
scale | Scale to use for the histogram. Specify log for a logarithmic scale, as in the example below.
Column Names
The following table lists the terse short names for particular metrics regarding task attempts.
Parameter | Description
---|---
 | Combine Task Attempt Input Records
 | Combine Task Attempt Output Records
 | Map Task Attempt Input Bytes
 | Map Task Attempt Input Records
 | Map Task Attempt Output Bytes
 | Map Task Attempt Output Records
 | Map Task Attempt Skipped Records
 | Reduce Task Attempt Input Groups
 | Reduce Task Attempt Input Records
 | Reduce Task Attempt Output Records
 | Reduce Task Attempt Shuffle Bytes
 | Reduce Task Attempt Skipped Records
 | Task Attempt CPU Time
 | Task Attempt Local Bytes Read
 | Task Attempt Local Bytes Written
 | Task Attempt MapR-FS Bytes Read
 | Task Attempt MapR-FS Bytes Written
 | Task Attempt Physical Memory Bytes
 | Task Attempt Spilled Records
 | Task Attempt Virtual Memory Bytes
 | Task Attempt Duration (histogram only)
 | Task Attempt Garbage Collection Time (histogram only)
 | Task Duration (histogram only)
 | Task Attempt ID (filter only)
 | Task Attempt Type (filter only)
 | Task Attempt Status (filter only)
 | Task Attempt Progress (filter only)
 | Task Attempt Start Time (filter only)
 | Task Attempt Finish Time (filter only)
 | Task Attempt Shuffle End
 | Task Attempt Sort End
 | Task Attempt Host Location
 | Location of logs
 | Freeform information about this task attempt, used for diagnosing behaviors
 | Reduce Task Attempt Skipped Groups (filter only)
 | Reduce Task Attempt Shuffle Bytes
 | Map Task Attempt Input Records per Second
 | Reduce Task Attempt Input Records per Second
 | Map Task Attempt Output Records per Second
 | Reduce Task Attempt Output Records per Second
 | Map Task Attempt Input Bytes per Second
 | Map Task Attempt Output Bytes per Second
 | Reduce Task Attempt Shuffle Bytes per Second
 | Task Status (filter only)
 | Task Duration
 | Task Type (filter only)
 | Primary Task Attempt ID (filter only)
 | Task Start Time (filter only)
 | Task End Time (filter only)
 | Task Host Location (filter only)
 | Task Host Locality (filter only)
Example
Retrieve a Task Histogram:
- REST
-
https://r1n1.sj.us:8443/rest/task/table?chart=bar&filter=%5Btt!=JOB_SETUP%5Dand%5Btt!=JOB_CLEANUP%5Dand%5Bjid==job_201129649560_3390%5D&columns=td&bincount=28&scale=log
- CURL
-
curl -d @json https://r1n1.sj.us:8443/api/task/table
In the curl example above, the json file contains a URL-encoded version of the information in the Request section below.
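The docs do not show the payload file itself; as an illustrative sketch, the percent-encoding can be reproduced with Python's standard urllib.parse. The filter expression, columns, and bin settings below are copied from the histogram example above, and the output filename json matches the curl -d @json flag:

```python
from urllib.parse import quote

# Filter expression from the histogram example above. quote() percent-encodes
# the brackets ("[" -> "%5B", "]" -> "%5D") while "!", "=", letters, digits,
# and "_" pass through unchanged.
filter_expr = "[tt!=JOB_SETUP]and[tt!=JOB_CLEANUP]and[jid==job_201129649560_3390]"
encoded = quote(filter_expr, safe="!=")

payload = "chart=bar&columns=td&bincount=28&scale=log&filter=" + encoded
print(payload)

# Write the payload to a file named "json" so it can be posted with:
#   curl -d @json https://r1n1.sj.us:8443/api/task/table
with open("json", "w") as f:
    f.write(payload)
```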
Request
GENERAL_PARAMS:
{
[chart: "bar"|"line"],
columns: <comma-separated list of column terse names>,
[filter: "[<terse_field>{operator}<value>]and[...]",]
[output: terse,]
[start: int,]
[limit: int]
}
REQUEST_PARAMS_HISTOGRAM:
{
chart: bar,
columns: td,
filter: <anything>
}
REQUEST_PARAMS_LINE:
{
chart: line,
columns: tapmem,
filter: NOT PARSED, UNUSED IN BACKEND
}
REQUEST_PARAMS_GRID:
{
columns: tid,tt,tsta,tst,tft,
filter: <any real filter expression>,
output: terse,
start: 0,
limit: 50
}
Response
RESPONSE_SUCCESS_HISTOGRAM:
{
"status" : "OK",
"total" : 15,
"columns" : ["td"],
"binlabels" : ["0-5s","5-10s","10-30s","30-60s","60-90s","90s-2m","2m-5m","5m-10m","10m-30m","30m-1h","1h-2h","2h-6h","6h-12h","12h-24h",">24h"],
"binranges" : [
[0,5000],
[5000,10000],
[10000,30000],
[30000,60000],
[60000,90000],
[90000,120000],
[120000,300000],
[300000,600000],
[600000,1800000],
[1800000,3600000],
[3600000,7200000],
[7200000,21600000],
[21600000,43200000],
[43200000,86400000],
[86400000]
],
"data" : [33,919,1,133,9820,972,39,2,44,80,11,93,31,0,0]
}
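To sketch how a client might consume the histogram response, the snippet below (plain Python, no MapR dependency) pairs each bin label with its count and picks the most populated bin; the fields are abbreviated from the sample response above:

```python
# Fields copied (abbreviated) from RESPONSE_SUCCESS_HISTOGRAM above.
response = {
    "status": "OK",
    "binlabels": ["0-5s", "5-10s", "10-30s", "30-60s", "60-90s", "90s-2m",
                  "2m-5m", "5m-10m", "10m-30m", "30m-1h", "1h-2h", "2h-6h",
                  "6h-12h", "12h-24h", ">24h"],
    "data": [33, 919, 1, 133, 9820, 972, 39, 2, 44, 80, 11, 93, 31, 0, 0],
}

# "data" is index-aligned with "binlabels", so zipping them yields the histogram.
histogram = dict(zip(response["binlabels"], response["data"]))
busiest = max(histogram, key=histogram.get)
print(busiest, histogram[busiest])  # -> 60-90s 9820
```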
RESPONSE_SUCCESS_GRID:
{
"status": "OK",
"total" : 67,
"columns" : ["ts","tid","tt","tsta","tst","tft","td","th","thl"],
"data" : [
["FAILED","task_201204837529_1284_9497_4858","REDUCE","attempt_201204837529_1284_9497_4858_3680",
1301066803229,1322663797292,21596994063,"newyork-rack00-8","remote"],
["PENDING","task_201204837529_1284_9497_4858","MAP","attempt_201204837529_1284_9497_4858_8178",
1334918721349,1341383566992,6464845643,"newyork-rack00-7","unknown"],
["RUNNING","task_201204837529_1284_9497_4858","JOB_CLEANUP","attempt_201204837529_1284_9497_4858_1954",
1335088225728,1335489232319,401006591,"newyork-rack00-8","local"]
]}
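Grid rows are positional arrays in the same order as the "columns" array. A minimal sketch of turning them into keyed records, using the first row of the sample grid response above:

```python
# Fields copied (first row only) from RESPONSE_SUCCESS_GRID above.
response = {
    "columns": ["ts", "tid", "tt", "tsta", "tst", "tft", "td", "th", "thl"],
    "data": [
        ["FAILED", "task_201204837529_1284_9497_4858", "REDUCE",
         "attempt_201204837529_1284_9497_4858_3680",
         1301066803229, 1322663797292, 21596994063, "newyork-rack00-8", "remote"],
    ],
}

# Zip each positional row with the column names to get a dict per task.
rows = [dict(zip(response["columns"], row)) for row in response["data"]]
print(rows[0]["ts"], rows[0]["thl"])  # -> FAILED remote
```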
RESPONSE_SUCCESS_LINE:
{
"status" : "OK",
"total" : 22,
"columns" : ["tapmem"],
"data" : [
[1329891055016,0],
[1329891060016,8],
[1329891065016,16],
[1329891070016,1024],
[1329891075016,2310],
[1329891080016,3243],
[1329891085016,4345],
[1329891090016,7345],
[1329891095016,7657],
[1329891100016,8758],
[1329891105016,9466],
[1329891110016,10345],
[1329891115016,235030],
[1329891120016,235897],
[1329891125016,287290],
[1329891130016,298390],
[1329891135016,301355],
[1329891140016,302984],
[1329891145016,303985],
[1329891150016,304403],
[1329891155016,503030],
[1329891160016,983038]
]
}
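Line-chart rows are [timestamp, value] pairs, where the timestamp is in epoch milliseconds. As a sketch (plain Python, values copied from the sample line response above), a client can check the sampling interval and convert the timestamps to readable times:

```python
from datetime import datetime, timezone

# A few [timestamp-ms, value] pairs copied from RESPONSE_SUCCESS_LINE above.
data = [
    [1329891055016, 0],
    [1329891060016, 8],
    [1329891065016, 16],
    [1329891160016, 983038],
]

# In the full response, consecutive samples are spaced 5,000 ms apart.
intervals = [b[0] - a[0] for a, b in zip(data, data[1:])]
print(intervals[:2])  # -> [5000, 5000]

# Timestamps are epoch milliseconds; divide by 1000 for datetime conversion.
first = datetime.fromtimestamp(data[0][0] / 1000, tz=timezone.utc)
print(first.isoformat())
```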