Bigdata Dev Apache Hadoop Resourcemanager

Channel         Revision    Published      Runs on
latest/stable   11          18 Mar 2021    Ubuntu 14.04
latest/edge     5           18 Mar 2021    Ubuntu 14.04
juju deploy bigdata-dev-apache-hadoop-resourcemanager

Platform: Ubuntu 14.04

Actions

  • downgrade

    Downgrade this unit

  • finalize

    Finalize and commit an in-progress upgrade

  • list-versions

    List available versions of Hadoop

  • mrbench

    MapReduce benchmark for small jobs (see the example invocation after this list)

    Params
    • basedir string

      DFS working directory

    • inputlines integer

      Number of input lines to generate

    • inputtype string

      Type of input to generate, one of [ascending, descending, random]

    • maps integer

      Number of maps for each run

    • numruns integer

      Number of times to run the job

    • reduces integer

      Number of reduces for each run

  • prepare-upgrade

    Prepare HDFS for an upgrade (see the upgrade workflow example after this list)

  • restart-yarn

    Restart all of the YARN processes on this unit (see the example after this list).

  • smoke-test

    Verify that YARN is working as expected by running a small (1 MB) terasort (see the example after this list).

  • start-yarn

    Start all of the YARN processes on this unit.

  • stop-yarn

    Stop all of the YARN processes on this unit.

  • teragen

    Generate data with teragen

    Params
    • indir string

      HDFS directory where generated data is stored

    • size integer

      Number of 100-byte rows to generate; defaults to enough rows for 1 GB of data

  • terasort

    Runs teragen to generate sample data, then runs terasort to sort it (see the example after this list)

    Params
    • compression string

      Enable or disable compression of the mapred (intermediate) output. One of: Gzip, BZip2, Snappy, Lzo, Default, Disable, LocalDefault (all case sensitive). Default uses Hadoop's default deflate codec; LocalDefault runs with your current local Hadoop configuration.

    • indir string

      HDFS directory where generated data is stored

    • maps integer

      The default number of map tasks per job. Try 1-20.

    • numtasks integer

      How many tasks to run per JVM. If set to -1, there is no limit.

    • outdir string

      HDFS directory where sorted data is stored

    • reduces integer

      The default number of reduce tasks per job. Typically set to 99% of the cluster's reduce capacity so that, if a node fails, the reduces can still finish in a single wave. Try 1-20.

    • size integer

      Number of 100-byte rows to generate and sort; defaults to enough rows for 1 GB of data

  • upgrade

    Upgrade this unit
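
The start-yarn, stop-yarn, and restart-yarn actions take no parameters. A minimal invocation sketch, assuming the charm was deployed under its default application name (so the first unit is bigdata-dev-apache-hadoop-resourcemanager/0) and a Juju 2.x client is in use:

    juju run-action bigdata-dev-apache-hadoop-resourcemanager/0 stop-yarn --wait
    juju run-action bigdata-dev-apache-hadoop-resourcemanager/0 start-yarn --wait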
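
To verify the deployment, smoke-test can be run the same way; the --wait flag blocks until the action finishes and prints its result:

    juju run-action bigdata-dev-apache-hadoop-resourcemanager/0 smoke-test --wait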
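
An mrbench run using the parameters documented above; the values here are illustrative only, not tuned settings:

    juju run-action bigdata-dev-apache-hadoop-resourcemanager/0 mrbench \
        maps=2 reduces=2 numruns=3 inputlines=100 inputtype=random --wait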
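
Likewise for teragen and terasort; the directory names and size below are placeholders, and omitted parameters fall back to the charm's defaults:

    juju run-action bigdata-dev-apache-hadoop-resourcemanager/0 teragen \
        size=1000000 indir=tera_in --wait
    juju run-action bigdata-dev-apache-hadoop-resourcemanager/0 terasort \
        indir=tera_in outdir=tera_out compression=Snappy --wait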
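
The upgrade-related actions (list-versions, prepare-upgrade, upgrade, finalize, and downgrade) are run in sequence. A sketch of that flow, omitting any version arguments that upgrade or downgrade may require, since none are documented on this page:

    juju run-action bigdata-dev-apache-hadoop-resourcemanager/0 list-versions --wait
    juju run-action bigdata-dev-apache-hadoop-resourcemanager/0 prepare-upgrade --wait
    juju run-action bigdata-dev-apache-hadoop-resourcemanager/0 upgrade --wait
    juju run-action bigdata-dev-apache-hadoop-resourcemanager/0 finalize --wait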