Bigdata Dev Apache Hadoop Namenode
- By Juju Big Data Development
| Channel | Revision | Published |
|---|---|---|
| latest/stable | 9 | 18 Mar 2021 |
| latest/edge | 4 | 18 Mar 2021 |
```shell
juju deploy bigdata-dev-apache-hadoop-namenode
```
- `downgrade`: Downgrade this unit.
- `finalize`: Finalize and commit an in-progress upgrade.
- `ingest`: Ingest a file from a URL into HDFS.
  - Params
    - `dest_dir`: The destination directory in HDFS for the file(s). If it, or any of its parents, does not exist, it will be created. The name of the file itself is taken from the URL.
    - `url` (string, required): One or more space-separated file URLs to ingest into HDFS.
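Assuming the application was deployed under its default name, an `ingest` run might look like the following (Juju 2.x `run-action` syntax; the unit name, URL, and destination directory are placeholders):

```shell
# Fetch a file from a URL and place it in HDFS under /user/ubuntu/data.
juju run-action bigdata-dev-apache-hadoop-namenode/0 ingest \
    url=http://example.com/data.csv dest_dir=/user/ubuntu/data --wait
```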
- `list-versions`: List the available versions of Hadoop.
- `nnbench`: Load-test the NameNode hardware and configuration.
  - Params
    - `basedir` (string): DFS working directory; the hostname is appended automatically.
    - `blocksize` (integer): Block size.
    - `bytes` (integer): Bytes to write.
    - `maps` (integer): Number of map jobs.
    - `numfiles` (integer): Number of files.
    - `reduces` (integer): Number of reduce jobs.
    - `repfactor` (integer): Replication factor per file.
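A sketch of an `nnbench` run follows; the parameter values are illustrative, not documented defaults:

```shell
# Stress the NameNode with many small-file operations.
juju run-action bigdata-dev-apache-hadoop-namenode/0 nnbench \
    maps=12 reduces=6 blocksize=1 bytes=0 numfiles=1000 repfactor=3 \
    basedir=/benchmarks/NNBench --wait
```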
- `prepare-upgrade`: Prepare HDFS for an upgrade.
- `restart-hdfs`: Restart all of the HDFS processes.
- `smoke-test`: Verify that HDFS is working by creating and removing a small file.
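The smoke test takes no parameters, so a run is a single command (unit name is a placeholder):

```shell
# Create and remove a small file in HDFS to confirm the cluster is healthy.
juju run-action bigdata-dev-apache-hadoop-namenode/0 smoke-test --wait
```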
- `start-hdfs`: Start all of the HDFS processes.
- `start-namenode`: Start the NameNode service on this unit.
- `stop-hdfs`: Stop all of the HDFS processes.
- `stop-namenode`: Stop the NameNode service on this unit.
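The service-control actions can be chained for maintenance; a sketch, assuming the default unit name:

```shell
# Stop and start every HDFS process on the unit, or restart in one step.
juju run-action bigdata-dev-apache-hadoop-namenode/0 stop-hdfs --wait
juju run-action bigdata-dev-apache-hadoop-namenode/0 start-hdfs --wait
juju run-action bigdata-dev-apache-hadoop-namenode/0 restart-hdfs --wait

# Or target only the NameNode service itself.
juju run-action bigdata-dev-apache-hadoop-namenode/0 stop-namenode --wait
juju run-action bigdata-dev-apache-hadoop-namenode/0 start-namenode --wait
```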
- `testdfsio`: DFS I/O testing.
  - Params
    - `buffersize` (integer): Buffer size in bytes.
    - `filesize` (integer): File size in MB.
    - `mode` (string): `read` or `write` I/O test.
    - `numfiles` (integer): Number of files.
    - `resfile` (string): Results file name.
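A `testdfsio` write pass might be invoked as below; the values and results path are illustrative assumptions:

```shell
# Write 10 files of 100 MB each with a 1 MiB buffer, saving results to a file.
juju run-action bigdata-dev-apache-hadoop-namenode/0 testdfsio \
    mode=write numfiles=10 filesize=100 buffersize=1048576 \
    resfile=/tmp/TestDFSIO_write.txt --wait
```

A matching `mode=read` run against the same files would then measure read throughput.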
- `upgrade`: Upgrade this unit.
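Taken together, the upgrade-related actions suggest a prepare → upgrade → finalize sequence, with `downgrade` as the rollback path. The ordering below is an assumption based on the action descriptions, not a documented procedure:

```shell
# Put HDFS into an upgrade-ready state, upgrade, then commit.
juju run-action bigdata-dev-apache-hadoop-namenode/0 prepare-upgrade --wait
juju run-action bigdata-dev-apache-hadoop-namenode/0 upgrade --wait
juju run-action bigdata-dev-apache-hadoop-namenode/0 finalize --wait

# If the upgrade misbehaves, roll back instead of finalizing.
juju run-action bigdata-dev-apache-hadoop-namenode/0 downgrade --wait
```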