The following tutorial will walk through the steps necessary to deploy an Initializr generated starter project to an empty Kubernetes cluster.
Important: The following requires ENTERPRISE credentials and access to various Broadleaf resources outlined in this guide: Getting Started Locally. Please read that section and have the appropriate local environment configured before proceeding. Certain plugins and dependencies referenced here are not available with an EVALUATION version. Please reach out to us if you have any additional questions.
To begin with, you'll need a Broadleaf-generated project structure as a basis for the deployment process. The tutorials below assume some default dependencies are chosen (which will work in conjunction with the generated helm charts referenced in this guide).
Generate an Initializr project from https://start.broadleafcommerce.com
Choose the latest stable release channel
Choose Balanced as the deployment flex package (Note: this is our recommended deployment for most enterprises)
Leave the default dependencies in place (Postgresql, Cloud Stream Kafka, Cloud Config)
Check the box for "Would you like to enable Demo Data?"
Click Generate, which should download a manifest.zip file to your local machine.
Unzip the manifest.zip into a folder on your local machine and cd into the manifest directory
Make sure you've set up the appropriate Broadleaf credentials as defined in the getting started guide here
In the manifest folder, run ./mvnw clean install flex:generate
In the manifest folder, run ./mvnw helm:generate -DdemoImages=true
After running this command, you should see a generated Broadleaf Microservices project structure in the same directory as the manifest folder.
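For orientation, the generated layout will look roughly like the following sketch (the manifest, security, config, and helm folders are the ones referenced throughout this tutorial; other generated folders, such as the flex package services themselves, are omitted here):

# Illustrative layout only - exact contents depend on your Initializr selections.
./manifest    - the Initializr-generated manifest project you ran the plugins from
./security    - cryptographically unique security artifacts (never commit these)
./config      - secure and insecure Spring configuration properties
./helm        - baseline Helm charts and installation scripts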
The following scripts used in this tutorial rely on pulling resources and artifacts from Broadleaf's Nexus. It's recommended that you export the following ENV properties with the Broadleaf credentials you were given. Otherwise, you will need to pass these in with every execution of the referenced scripts below.
Configure credentials to Broadleaf Nexus in your .bashrc or .zshrc:
export BLC_REGISTRY_USER=<my_broadleaf_nexus_user>
export BLC_REGISTRY_PASSWORD=<my_broadleaf_nexus_password>
export BLC_REGISTRY_EMAIL=<my_broadleaf_nexus_email>
At this point, you should have a locally working Broadleaf Microservices project structure with all the necessary components for a deployment to Kubernetes.
This project includes all the cryptographically unique security artifacts and the related encrypted configuration properties, located in the security and config folders respectively (these were generated when running the flex:generate plugin). The contents of these folders are important and uniquely tie all the components of a Broadleaf installation together. Keep note of these folders in particular, as you will be using and installing these artifacts to your Kubernetes cluster in the following tutorial.
Note: if for some reason the contents of your security and config folders get out of sync, you can always delete them and run flex:generate again to re-generate a full new set of unique security artifacts, or just run flex:generate when you've updated or added new secrets to your credentials report to synchronize the configuration properties.
Do NOT check the contents of the security folder into any source control. Please review the README in that folder and this security overview guide for more details on the contents of this directory. From an operational perspective, we recommend storing these artifacts in a secure vault location (e.g. Google Secret Manager, Azure Key Vault, HashiCorp Vault, etc.).
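As a hedged illustration of that vault approach (this is not part of the generated scripts, and the secret name below is just an example), the credentials report could be pushed to and pulled from Google Secret Manager like so:

# Store a generated security artifact in Google Secret Manager (illustrative name).
gcloud secrets create blc-credentials-report \
  --replication-policy="automatic" \
  --data-file=security/credentials-report.env

# A CI/CD pipeline could later retrieve it with:
gcloud secrets versions access latest --secret=blc-credentials-report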
Note: For simplicity and for the purposes of this tutorial, we will not require a secure vault; instead we will be passing the contents of these security directories directly to the installation script from your local machine. From a CI/CD perspective, we recommend managing these in a secure vault and having your pipeline harvest them when creating resources on your cluster.
This folder contains both secure and insecure Spring properties which are meant to be loaded into Broadleaf's Config Service (a distributed configuration service built on Spring Cloud Config). Please review the README in the config folder as well as our documentation around Environment Configuration for more details.
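For context, Spring Cloud Config marks encrypted values with a {cipher} prefix, so an entry in a secure property file looks along these lines (the property key and ciphertext below are placeholders, not values from your generated config):

# Illustrative secure property entry; the actual keys come from your credentials report.
some.integration.api-key: '{cipher}AQACAg...base64-ciphertext...'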
Spring Cloud Config supports multiple backing repositories to store these properties. When setting up a real environment, we recommend following a GitOps approach (which is also Spring's default implementation) by creating one or more Git backends to store both the secure and insecure properties. It's important to note that the secure properties are encrypted with unique keys (which are located in the security folder). From a best practice perspective, we would recommend creating a separate Git repo to store the secure properties and another repo to store the insecure properties. This way you can restrict access to each repo to the users that need the ability to manage those types of properties.
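A minimal sketch of what that could look like in the Config Service's own configuration, assuming two hypothetical Git repositories (the URIs are placeholders):

spring:
  profiles:
    active: composite
  cloud:
    config:
      server:
        composite:
          # Hypothetical repo holding the encrypted (secure) properties
          - type: git
            uri: https://github.com/my-org/blc-config-secure.git
          # Hypothetical repo holding the plain (insecure) properties
          - type: git
            uri: https://github.com/my-org/blc-config-insecure.git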
Note: For simplicity and for the purposes of this tutorial, we will not require the creation of new Git repositories to store these properties, but instead pass the contents of the config folder directly to the installation script. For this tutorial, we will use a file-based Config Server backend (instead of Git) purely for simplicity.
Additionally, you will see a helm folder that contains baseline helm charts describing the deployment footprint of your generated project structure, as well as utility installation scripts to deploy these charts to an empty K8 cluster. The Helm Charts generated by the helm:generate plugin are meant to be a baseline starting point. The expectation is for you to adapt them to meet the organizational standards and processes that are specific to your implementation.
Note: The helm charts for this tutorial will reference pre-built "demo" docker images (i.e. enabled by using the flag -DdemoImages=true when invoking the helm:generate plugin). When setting up a project for a real implementation, you will want to integrate your own CI/CD processes (or leverage Broadleaf Cloud services) to build your own managed docker images for the custom services you generated in this Initializr project.
Because we recommend that most enterprises start by deploying the Balanced Flex Package, this tutorial and most of the examples you will see on this developer portal are catered to deploying the Balanced Flex Package composition by default.
If you're unfamiliar with the topology of the Balanced Flex Package, we recommend reviewing this High Level Architecture Overview describing the specifics of a Balanced composition.
It may also be helpful to review this example topology map to get a visual reference of the microservices footprint being deployed to Kubernetes:
For this tutorial you will need:
Note: If you already had minikube installed and have previously created a minikube docker image locally, you may need to delete and re-create a new instance using this process. The default local installation process for this tutorial "mounts" an external host path to your current working directory in the docker container. This "mount" is built into the image and can't be modified once the docker minikube image is initially created, so if you want to change or add a mount path, you'll want to delete your current instance of minikube by running minikube delete before running the steps below (otherwise you may get an error running minikube-start.sh saying that you can't change mount paths: "Exiting due to GUEST_MOUNT_CONFLICT: Sorry, docker does not allow mounts to be changed after container creation").
Host Entries (/etc/hosts)
The local installation process using minikube and the associated helm charts leverages the ingress addon and works in conjunction with static domains that are configured for the Broadleaf applications.
In order to access the applications properly, you will have to update your /etc/hosts file with the following host entries (depending on your system platform).
Setting up ingress on minikube for MacOS doesn't work the same as it does for minikube on Ubuntu/Linux. On MacOS, you must have your /etc/hosts file pointing to 127.0.0.1 and must run minikube tunnel to expose the ingress:
127.0.0.1 allthesauce.localk8
127.0.0.1 aaahotsauces.localk8
127.0.0.1 heatclinic.localk8
127.0.0.1 admin.localk8
Inside the helm folder, you should find several installation scripts that can be used to install the Broadleaf ecosystem onto minikube.
Go into the helm directory:
cd helm
Run ./minikube-start.sh
This will begin by building out a minikube container that mounts your current project's working directory as an external host path. The reason this is needed is to bind the security and config directories as accessible file mounts for containers running in your minikube instance. The containers running in minikube will need access to the security artifacts inside these directories
Once the minikube container is built, it will then begin the helm installation process using the configured charts in this current helm directory. It does this by calling the install.sh script that is also located in this same directory
The execution of the script may take up to a few minutes to fully complete.
It will take some additional time for all the containers in your cluster to become fully available
We recommend a tool like the K8 Dashboard or Lens to monitor the statuses of the containers as they are being installed on the cluster (or use kubectl directly, as sketched below)
Typical Full Completion Time: 6-8 minutes from an empty K8 cluster (times may vary depending on the resources available to your machine)
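If you prefer the command line to a dashboard, a simple way to watch the rollout (assuming everything is installed to the default namespace, as this tutorial does) is:

# Watch pod status as the charts come up; Ctrl+C to stop watching.
kubectl get pods --watch

# Or check that the workloads report ready:
kubectl get deployments,statefulsets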
Note: if you are running on MacOS, you will need to run sudo minikube tunnel to expose the minikube ingress to your host machine
After all containers are up, you should now be able to visit the following in your browser:
Admin Console: https://admin.localk8
Username: master@test.com
Password: <dynamically generated>
Note: the default passwords for the initial admin master users are dynamically generated and can be found in your security folder in your credentials-report.env file.
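For example, you could simply print the report and scan it for the master admin entry (the exact key names in the file may vary, so treat the grep as a best-effort filter):

# From the helm directory; adjust the path to wherever your security folder lives.
cat ../security/credentials-report.env

# Best-effort filter; fall back to the plain cat above if this misses the entry.
grep -i 'master' ../security/credentials-report.env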
Because you enabled the "Hot Sauce Demo" on your manifest.yml, the following demo storefronts are also accessible:
Storefront 1: https://heatclinic.localk8
Storefront 2: https://aaahotsauces.localk8
For this tutorial you will need:
a standard GKE cluster (minimum K8 version of 1.24.x) - we recommend a BS3 or BM3 as a baseline for a typical "DEV" cluster. More information about the BM3 terminology can be found in our scalability report and reference guide here. For GKE, this typically equates to a 3 node pool of machine type n2-standard-4 or n2-standard-8 using the default OS (cos_containerd)
gcloud CLI installed locally
docker installed locally
kubectl installed locally
helm installed locally
The starter project and demo data assume a static and pre-defined domain name that supports subdomains. Be sure to have one ready or in mind before you start this guide.
For the purposes of this tutorial we will assume the domain *.broadleaftutorial.com with the following provisioned application URLs:
https://admin.broadleaftutorial.com - for the back-office unified microservices admin
https://heatclinic.broadleaftutorial.com - for demo storefront 1
https://aaahotsauces.broadleaftutorial.com - for demo storefront 2
Note: It's preferable that you have the ability to control the DNS and can add a CNAME or A record to point to an accessible ingress; however, if that is not possible, you can change your /etc/hosts file for individual local testing purposes.
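As a sketch of that /etc/hosts fallback (the IP below is a placeholder for whatever address your ingress is eventually assigned, and the domains assume the example *.broadleaftutorial.com used in this tutorial):

# Replace 203.0.113.10 with the external IP assigned to your ingress.
203.0.113.10 admin.broadleaftutorial.com
203.0.113.10 heatclinic.broadleaftutorial.com
203.0.113.10 aaahotsauces.broadleaftutorial.com
203.0.113.10 allthesauce.broadleaftutorial.com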
Next, let's change the contents of your values-local.yaml files in the helm directory to use your chosen domain (in our case, *.broadleaftutorial.com) as well as override some default values to support cloud-based deployments.
Go ahead and make the following changes to these files:
helm/blc-auth/values-local.yaml
extraEnvs:
  - name: SPRING_LIQUIBASE_PARAMETERS_BLCAUTHADMINDEFAULTREDIRECTURI
    value: https://admin.broadleaftutorial.com
  - name: SPRING_LIQUIBASE_PARAMETERS_BLCAUTHADMINCLIENTREDIRECT1
    value: https://admin.broadleaftutorial.com/callback
  - name: SPRING_LIQUIBASE_PARAMETERS_BLCAUTHADMINCLIENTREDIRECT2
    value: https://admin.broadleaftutorial.com/silent-callback.html
  - name: SPRING_LIQUIBASE_PARAMETERS_BLCAUTHADMINCLIENTREDIRECT3
    value: https://admin.broadleaftutorial.com/
helm/blc-browse/values-local.yaml
persistence:
  # If you wish to support multiple Flex Package replicas containing the "asset" microservice,
  # you will want to provision a PV that supports a ReadWriteMany access mode.
  # NOTE: support can vary by resource provider
  # See https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes for details.
  asset:
    name: blc-assets-pvc
    enabled: true
    accessModes:
      - ReadWriteMany
    size: 20Gi
    annotations:
      "helm.sh/resource-policy": keep
    mountPath: /broadleaf/assets
    storageClassName: "nfs"
Note: for this tutorial, we will be installing a sample nfs server for this storage class
helm/blc-job-data/values-local.yaml
extraEnvs:
  - name: "BROADLEAF_TENANT_LIQUIBASE_PARAMETERS_BLCTENANTNAME"
    value: "BLC Tutorial Tenant"
  - name: "BROADLEAF_TENANT_LIQUIBASE_PARAMETERS_BLCTENANTIDENTIFIERVALUE"
    value: "admin.broadleaftutorial.com"
  - name: "BROADLEAF_TENANT_LIQUIBASE_PARAMETERS_HEATCLINICTENANTAAAIDTYPE"
    value: "DOMAIN_PREFIX"
  - name: "BROADLEAF_TENANT_LIQUIBASE_PARAMETERS_HEATCLINICTENANTAAAIDVALUE"
    value: "aaahotsauces"
  - name: "BROADLEAF_TENANT_LIQUIBASE_PARAMETERS_HEATCLINICTENANTHEATIDTYPE"
    value: "DOMAIN_PREFIX"
  - name: "BROADLEAF_TENANT_LIQUIBASE_PARAMETERS_HEATCLINICTENANTHEATIDVALUE"
    value: "heatclinic"
  - name: "BROADLEAF_TENANT_LIQUIBASE_PARAMETERS_HEATCLINICTENANTALLSAUCEIDTYPE"
    value: "DOMAIN_PREFIX"
  - name: "BROADLEAF_TENANT_LIQUIBASE_PARAMETERS_HEATCLINICTENANTALLSAUCEIDVALUE"
    value: "allthesauce"
  - name: "SPRING_LIQUIBASE_PARAMETERS_BLCAUTHADMINDEFAULTREDIRECTURI"
    value: 'https://admin.broadleaftutorial.com'
  - name: "SPRING_LIQUIBASE_PARAMETERS_BLCAUTHADMINCLIENTREDIRECT1"
    value: 'https://admin.broadleaftutorial.com/callback'
  - name: "SPRING_LIQUIBASE_PARAMETERS_BLCAUTHADMINCLIENTREDIRECT2"
    value: 'https://admin.broadleaftutorial.com/silent-callback.html'
  - name: "SPRING_LIQUIBASE_PARAMETERS_BLCAUTHADMINCLIENTREDIRECT3"
    value: 'https://admin.broadleaftutorial.com'
  - name: "SPRING_LIQUIBASE_PARAMETERS_HEATCLINICAUTHAAADEFAULTREDIRECTURI"
    value: 'https://aaahotsauces.broadleaftutorial.com'
  - name: "SPRING_LIQUIBASE_PARAMETERS_HEATCLINICAUTHAAACLIENTREDIRECT1"
    value: 'https://aaahotsauces.broadleaftutorial.com/silent-callback.html'
  - name: "SPRING_LIQUIBASE_PARAMETERS_HEATCLINICAUTHAAACLIENTREDIRECT2"
    value: 'https://aaahotsauces.broadleaftutorial.com/callback'
  - name: "SPRING_LIQUIBASE_PARAMETERS_HEATCLINICAUTHAAACLIENTREDIRECT3"
    value: 'https://www.aaahotsauces.broadleaftutorial.com/callback'
  - name: "SPRING_LIQUIBASE_PARAMETERS_HEATCLINICAUTHAAACLIENTREDIRECT5"
    value: 'https://www.aaahotsauces.broadleaftutorial.com'
  - name: "SPRING_LIQUIBASE_PARAMETERS_HEATCLINICAUTHAAACLIENTREDIRECT7"
    value: 'https://aaahotsauces.broadleaftutorial.com'
  - name: "SPRING_LIQUIBASE_PARAMETERS_HEATCLINICAUTHAAACLIENTREDIRECT9"
    value: 'https://www.aaahotsauces.broadleaftutorial.com/silent-callback.html'
  - name: "SPRING_LIQUIBASE_PARAMETERS_HEATCLINICAUTHHEATDEFAULTREDIRECTURI"
    value: 'https://heatclinic.broadleaftutorial.com'
  - name: "SPRING_LIQUIBASE_PARAMETERS_HEATCLINICAUTHHEATCLIENTREDIRECT1"
    value: 'https://heatclinic.broadleaftutorial.com/silent-callback.html'
  - name: "SPRING_LIQUIBASE_PARAMETERS_HEATCLINICAUTHHEATCLIENTREDIRECT2"
    value: 'https://www.heatclinic.broadleaftutorial.com/callback'
  - name: "SPRING_LIQUIBASE_PARAMETERS_HEATCLINICAUTHHEATCLIENTREDIRECT4"
    value: 'https://www.heatclinic.broadleaftutorial.com'
  - name: "SPRING_LIQUIBASE_PARAMETERS_HEATCLINICAUTHHEATCLIENTREDIRECT5"
    value: 'https://www.heatclinic.broadleaftutorial.com/silent-callback.html'
  - name: "SPRING_LIQUIBASE_PARAMETERS_HEATCLINICAUTHHEATCLIENTREDIRECT7"
    value: 'https://heatclinic.broadleaftutorial.com'
  - name: "SPRING_LIQUIBASE_PARAMETERS_HEATCLINICAUTHHEATCLIENTREDIRECT9"
    value: 'https://heatclinic.broadleaftutorial.com/callback'
  - name: "SPRING_LIQUIBASE_PARAMETERS_HEATCLINICAUTHALLSAUCEDEFAULTREDIRECTURI"
    value: 'https://allthesauce.broadleaftutorial.com'
  - name: "SPRING_LIQUIBASE_PARAMETERS_HEATCLINICAUTHALLSAUCECLIENTREDIRECT1"
    value: 'https://www.allthesauce.broadleaftutorial.com/callback'
  - name: "SPRING_LIQUIBASE_PARAMETERS_HEATCLINICAUTHALLSAUCECLIENTREDIRECT3"
    value: 'https://www.allthesauce.broadleaftutorial.com'
  - name: "SPRING_LIQUIBASE_PARAMETERS_HEATCLINICAUTHALLSAUCECLIENTREDIRECT4"
    value: 'https://www.allthesauce.broadleaftutorial.com/silent-callback.html'
  - name: "SPRING_LIQUIBASE_PARAMETERS_HEATCLINICAUTHALLSAUCECLIENTREDIRECT6"
    value: 'https://allthesauce.broadleaftutorial.com/callback'
  - name: "SPRING_LIQUIBASE_PARAMETERS_HEATCLINICAUTHALLSAUCECLIENTREDIRECT7"
    value: 'https://allthesauce.broadleaftutorial.com'
  - name: "SPRING_LIQUIBASE_PARAMETERS_HEATCLINICAUTHALLSAUCECLIENTREDIRECT8"
    value: 'https://allthesauce.broadleaftutorial.com/silent-callback.html'
helm/blc-supporting/values-local.yaml
env:
  BROADLEAF_SEARCH_SOLR_SERVER_HOST: zookeeper-0.zookeeper-headless.default.svc.cluster.local:2181
  BROADLEAF_SEARCH_SOLR_SERVER_CHROOT: /solr
extraEnvs:
  - name: SPRING_LIQUIBASE_PARAMETERS_BLCTENANTIDENTIFIERVALUE
    value: admin.broadleaftutorial.com
  - name: BROADLEAF_TENANT_URLRESOLVER_APPLICATION_SCHEME
    value: https
  - name: BROADLEAF_TENANT_URLRESOLVER_APPLICATION_DOMAIN
    value: broadleaftutorial.com
  - name: BROADLEAF_TENANT_URLRESOLVER_APPLICATION_PORT
    value: '443'
  - name: BROADLEAF_TENANT_URLRESOLVER_ADMIN_SCHEME
    value: https
  - name: BROADLEAF_TENANT_URLRESOLVER_ADMIN_DOMAIN
    value: admin.broadleaftutorial.com
  - name: BROADLEAF_TENANT_URLRESOLVER_ADMIN_PORT
    value: '443'
helm/local-ingress/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ms-basic-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-max-temp-file-size: "20m"
    nginx.ingress.kubernetes.io/proxy-body-size: "20m"
spec:
  tls:
    - hosts:
        - heatclinic.broadleaftutorial.com
        - aaahotsauces.broadleaftutorial.com
        - allthesauce.broadleaftutorial.com
        - admin.broadleaftutorial.com
      secretName: ingress-tls
  rules:
    - host: heatclinic.broadleaftutorial.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: commercegateway
                port:
                  number: 8456
    - host: aaahotsauces.broadleaftutorial.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: commercegateway
                port:
                  number: 8456
    - host: allthesauce.broadleaftutorial.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: commercegateway
                port:
                  number: 8456
    - host: admin.broadleaftutorial.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: admingateway
                port:
                  number: 8446
In contrast to the minikube setup, where we are able to "mount" your local file system to containers running in your minikube local instance, we won't be able to do the same with a cluster running on GKE. Instead, we will create Kubernetes secrets from the contents within the config directory and mount the K8 secret as a file-backed repo for the Config Service. In your helm directory, we've provided a utility script to help with this process.
cd into the helm directory
run ./generate-local-k8-config-secrets.sh
This script will look in the parent directory of the current helm directory for a config folder and attempt to create Kubernetes secret files from its contents
If successful, you should see the files do-not-commit-secure-secret.yaml and do-not-commit-insecure-secret.yaml emitted to the helm directory
Keep note of these new K8 secret files, as they will be used for installation
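If you want to sanity-check what was generated before installing (this is optional; the install script will apply these files for you), a client-side dry run is a harmless way to validate the manifests:

# Validate the generated secret manifests without creating anything on the cluster.
kubectl apply --dry-run=client -f do-not-commit-secure-secret.yaml
kubectl apply --dry-run=client -f do-not-commit-insecure-secret.yaml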
Note: Again, as mentioned earlier, for a real project setup we recommend a GitOps approach by configuring your Config Server to point to Git backends to manage your secure and insecure property files; however, for simplicity we are using a file-based approach for this tutorial.
Note: the installation process for this tutorial assumes that all resources will be installed to the default namespace
Authenticate with the gcloud sdk (gcloud auth login)
Grab the connection info for your GKE cluster associated with your Google Project
gcloud container clusters get-credentials <my-cluster> --zone <my-zone> --project <my-project>
Validate that your current kubectl context is pointing to the GKE cluster you are connected to (i.e. run kubectl config current-context)
cd into the helm directory
run: LOCAL_INITIALIZR_PROJECT_DIR_PATH=../ ./install.sh -k true -e local -n true
The -k flag indicates that you've generated the do-not-commit-secure-secret.yaml and do-not-commit-insecure-secret.yaml previously and that the installation script should apply these as part of the installation process.
The -e flag allows the script to apply any environment-specific logic as part of the installation, for example applying any blc-chart/values-<env>.yaml files as an override as part of a helm installation.
The -n flag indicates that the process should provision a sample nfs server. This is just used as a reference to support ReadWriteMany file mounts for the Broadleaf Asset Service. Typically you would integrate with a cloud-native solution that supports ReadWriteMany for the blc-assets-pvc (see the sketch below).
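For instance, a hedged sketch of pointing the asset PVC at a provider-managed ReadWriteMany storage class instead of the sample nfs server - the class name here is a placeholder, so check what your cluster actually offers with kubectl get storageclass:

# helm/blc-browse/values-local.yaml (sketch): swap the sample "nfs" class for a
# cloud-native ReadWriteMany class; "my-rwx-storageclass" is a placeholder name.
persistence:
  asset:
    name: blc-assets-pvc
    enabled: true
    accessModes:
      - ReadWriteMany
    size: 20Gi
    storageClassName: "my-rwx-storageclass"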
TIP: the ./install.sh script is structured to be idempotent (i.e. you can run the same script multiple times; it installs the charts if they are not already installed, and upgrades them if they are).
The installation script may take a few minutes to complete
Complete Startup Time: from an empty cluster to all containers being available may take up to 6-8 minutes
Finally, we can create an NGINX Controller For GKE (https://kubernetes.github.io/ingress-nginx/deploy/#gce-gke):
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole cluster-admin \
  --user $(gcloud config get-value account)

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/cloud/deploy.yaml
A few minutes after applying the NGINX controller, you should be assigned an IP for the ms-basic-ingress that was created:
kubectl describe ingress ms-basic-ingress
Grab this IP and either create a CNAME or A record on your DNS provider or change your local /etc/hosts file to point your configured domains to this allocated IP (see the sketch below for one way to pull out just the address).
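Assuming the ingress is exposed via a load-balancer IP rather than a hostname, the following prints only the allocated address:

# Print only the external IP allocated to the ingress.
kubectl get ingress ms-basic-ingress \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'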
After all containers are up, you should now be able to visit the following in your browser:
Admin Console: https://admin.broadleaftutorial.com
Username: master@test.com
Password: <dynamically generated>
Note: the default passwords for the initial admin master users are dynamically generated and can be found in your security folder in your credentials-report.env file.
Because you enabled the "Hot Sauce Demo" on your manifest.yml, the following demo storefronts are also accessible:
Storefront 1: https://heatclinic.broadleaftutorial.com
Storefront 2: https://aaahotsauces.broadleaftutorial.com
The helm:generate plugin generates a values-local.yaml with override configurations that are catered to deploying a "minimal evaluation" footprint, i.e. some values specified in the values-local.yaml explicitly configure single-node deployments for certain services to make deploying to a local developer environment easier.
Specifically, with the values-local.yaml files, we've configured properties to only deploy a "single node" Zookeeper, Kafka, and Solr cluster.
Examples of this explicit "single node" configuration can be found in these files:
helm/blc-common-env/values-local.yaml
data:
  SPRING_CLOUD_STREAM_KAFKA_BINDER_BROKERS: 'kafkacluster-0.kafkacluster-headless.default.svc.cluster.local:9094'
  SPRING_CLOUD_STREAM_KAFKA_BINDER_ZKNODES: 'zookeeper-0.zookeeper-headless.default.svc.cluster.local:2181'
helm/blc-configserver/values-local.yaml
env:
  SPRING_CLOUD_STREAM_KAFKA_BINDER_BROKERS: 'kafkacluster-0.kafkacluster-headless.default.svc.cluster.local:9094'
  SPRING_CLOUD_STREAM_KAFKA_BINDER_ZKNODES: 'zookeeper-0.zookeeper-headless.default.svc.cluster.local:2181'
helm/blc-[flexpackage]/values-local.yaml
env:
  BROADLEAF_SEARCH_SOLR_SERVER_HOST: zookeeper-0.zookeeper-headless.default.svc.cluster.local:2181
helm/blc-job-solr-create-collections/values-local.yaml
env:
  ZK_HOST: zookeeper-0.zookeeper-headless.default.svc.cluster.local:2181
  CHECK_SOLR_LIST: '"solr-0.solr-headless:8983/solr/admin/cores?action=STATUS"'
  BLC_COLLECTIONS_REPLICATION_FACTOR: 1
helm/blc-job-zk-upconfig/values-local.yaml
env:
  ZK_HOST: zookeeper-0.zookeeper-headless.default.svc.cluster.local:2181
helm/blc-kafka/values-local.yaml
replicaCount: 1
env:
  KAFKA_ZOOKEEPER_CONNECT: zookeeper-0.zookeeper-headless.default.svc.cluster.local:2181/broadleaf/kafka
  KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  CHECK_ZK_LIST: '"zookeeper"'
helm/blc-solr/values-local.yaml
replicaCount: 1
env:
  SPRING_CLOUD_STREAM_KAFKA_BINDER_BROKERS: kafkacluster-0.kafkacluster-headless.default.svc.cluster.local:9094
  SPRING_CLOUD_STREAM_KAFKA_BINDER_ZKNODES: zookeeper-0.zookeeper-headless.default.svc.cluster.local:2181
  ZK_HOST: zookeeper-0.zookeeper-headless.default.svc.cluster.local:2181
  CHECK_ZK_LIST: '"zookeeper-0.zookeeper-headless.default.svc.cluster.local:2181"'
helm/blc-solr-exporter/values-local.yaml
replicaCount: 1
env:
  ZK_HOST: zookeeper-0.zookeeper-headless.default.svc.cluster.local:2181
  CHECK_ZK_LIST: '"zookeeper-0.zookeeper-headless.default.svc.cluster.local:2181"'
helm/blc-zookeeper/values-local.yaml
replicaCount: 1
zkServers: zookeeper-0.zookeeper-headless.default.svc.cluster.local:2888:3888
In an actual production environment, it is recommended that you deploy a minimum 3-node StatefulSet for Zookeeper, Kafka, and Solr to maintain quorum capabilities. This is already the default configuration for the "base" values.yaml chart. You'll want to make sure to utilize that minimum 3-node "base" configuration for your production environments.
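To make that concrete, a sketch of what a quorum-capable override looks like, assuming the same replicaCount convention used by the charts above (the base values.yaml already defaults to this, so the point is simply not to carry the single-node local overrides into production):

# Illustrative production-oriented override for a chart such as blc-zookeeper:
# keep at least three replicas so the ensemble can maintain quorum.
replicaCount: 3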