15 machines, 16 units
At Spicule we love building big data platforms. This is our reference bundle for Apache Druid and Apache Bigtop. If you spin up this bundle you'll get a fully working Druid deployment with a Hadoop processing backend.
This bundle runs in both public and private clouds to give you blazing fast analytics over huge amounts of data.
To deploy this stack you can simply press the deploy button at the top of the page or run:
juju deploy ~spiculecharm/druid-bundle
This will spin up 15 machines and deploy the 16 units to their respective machines. Deployment time varies with cloud and network performance, but it usually takes about 20 minutes until you have a fully operational and scalable Druid/Hadoop platform.
To check that all the components have deployed successfully, open the Status tab in the Juju GUI or run:
juju status
and ensure none of the units are reporting an error state.
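If you prefer to stay on the command line, you can narrow the status output down to problem units. For example (a quick sketch assuming a standard juju client; the exact status text can vary between Juju versions):
juju status | grep -i error
An empty result means no unit is reporting an error state.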
To scale units, select the charm in the GUI, then in the menu on the left select the units and enter the number of extra units you require. Alternatively, run:
juju add-unit -n 1 <charm name>
Where 1 is the number of new units you want and <charm name> is the name of the charm you want to scale.
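For example, to add two extra units of one of the bundle's charms (the charm name hadoop-slave below is illustrative; substitute the name of the charm you actually want to scale, as shown by juju status):
juju add-unit -n 2 hadoop-slave
Juju will provision new machines for the extra units and join them to the deployment automatically.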
You can get help and support for this bundle from: