Druid Hadoop #1

15 machines, 16 units

Overview

At Spicule we love to build big data platforms. This is our reference bundle for Apache Druid and Apache Bigtop. If you spin up this bundle you'll get a fully working Druid deployment with a Hadoop processing backend.

This bundle will run in both public and private clouds to give you blazing fast analytics over huge amounts of data.

Bundle Composition

  • Hadoop Worker x3
  • Hadoop NameNode
  • Hadoop ResourceManager
  • Hadoop Client
  • Druid Config
  • Druid Coordinator
  • Druid Overlord
  • Druid Historical
  • Druid MiddleManager
  • Zookeeper x3
  • MySQL

Deploying

To deploy this stack you can simply press the Deploy button at the top of the page, or run:

juju deploy ~spiculecharm/druid-bundle

This will spin up 15 machines and deploy the 15 components to their respective machines. Deployment time varies with cloud and network performance, but it usually takes about 20 minutes until you have a fully operational and scalable Druid Hadoop platform.
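For example, to deploy the bundle into its own model so it is easy to tear down later (the model name `druid-demo` below is just an illustration, not part of the bundle), you could run:

```shell
# Create a dedicated model for the 15 machines (model name is an example)
juju add-model druid-demo

# Deploy the reference bundle from the charm store
juju deploy ~spiculecharm/druid-bundle

# Poll the deployment until all units settle (usually ~20 minutes)
watch -n 30 juju status
```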

Verifying

To check that all the components have deployed successfully you can check the Status tab in the Juju GUI or run:

juju status

And ensure none of the units are reporting an error state.
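If you prefer a scriptable check, something along these lines will exit non-zero when any unit reports an error state. This is a sketch that assumes `jq` is installed and that your Juju version's JSON output uses the `workload-status` field shown:

```shell
# Exit non-zero if any unit's workload status is "error"
juju status --format=json \
  | jq -e '[.applications[].units[]?."workload-status".current] | all(. != "error")'
```

This can be dropped into a CI job or a cron check after deployment.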

Monitoring

Scaling

To scale units, select the charm in the GUI, then in the menu on the left select the units and enter the number of extra units you require. Alternatively, run:

juju add-unit -n 1 <charm name>

Where 1 is the number of new units you want and <charm name> is the name of the charm you want to scale.
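As a concrete example, assuming the Hadoop workers are deployed under the application name `hadoop-worker` (check `juju status` for the exact name in your model), adding two more workers would look like:

```shell
# Add two extra Hadoop worker units (application name is an assumption)
juju add-unit -n 2 hadoop-worker
```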

Issues

Contact Information

You can get help and support for this bundle from:

info@anssr.io

Resources

Bundle configuration