
Docker Configuration

Important
This document covers release train 2.x.x and beyond. For 1.x.x releases, refer to the prior documentation at docker-configuration-1-x.

Introduction

Broadleaf’s starter projects include various modules that produce Docker images. This document covers file contents, standard builds, configurations, and customizations.

Find all Docker build configurations in the starter projects

For backend projects, Broadleaf’s Docker build configurations typically involve two main components: a Dockerfile and a pom.xml containing a docker profile that builds the image for it. This is because we prefer to build Docker images as part of the Maven lifecycle. For frontend projects, the process typically just involves a Dockerfile.

You can execute find . -name Dockerfile from the root of your starter project repository to get a list of all paths at which Dockerfiles are found.

Then, for backend projects, you can use those paths to also find the corresponding pom.xml in charge of building the image, since they are usually in the same directory.
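The two lookups can be combined into a single pass. The sketch below first creates a hypothetical repository layout purely so the example is self-contained; in your actual project, just run the find loop from the repository root.

```shell
# Illustrative repo layout (your actual module paths will differ)
repo=$(mktemp -d)
mkdir -p "$repo/services/auth"
touch "$repo/services/auth/Dockerfile" "$repo/services/auth/pom.xml"

cd "$repo"
# For each Dockerfile, report whether a sibling pom.xml exists
find . -name Dockerfile | while read -r f; do
  dir=$(dirname "$f")
  if [ -f "$dir/pom.xml" ]; then
    echo "$dir: Dockerfile with sibling pom.xml (backend)"
  else
    echo "$dir: Dockerfile only (frontend)"
  fi
done
```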

See Appendix B for more build considerations.

Secure by Default (since 2.0.1)

Starting with 2.0.1-GA, the manifest-based starter defaults to generating Alpine Linux images for all Broadleaf components. This platform is slimmer and, as a result, more secure. The tradeoff is that this flavor is only available for amd64, though that is generally satisfactory for most production applications. Alternatively, there is an option to use an Ubuntu Linux image, which is available as a multiplatform image with both arm64 and amd64 support. The tradeoff there is that it is generally less secure, with more Linux OS vulnerabilities. Configuration options are covered below.

Configuration Matrix

Project Type   | Image Type   | Platform      | Security    | Notes                        | Setup
---------------|--------------|---------------|-------------|------------------------------|----------
Manifest-Based | Alpine Linux | amd64         | More Secure | Recommended                  | Section 1
Manifest-Based | Ubuntu Linux | multiplatform | Less Secure | Recommended if arm64 in prod | Section 2
Non-Manifest   | Alpine Linux | amd64         | More Secure | Recommended                  | Section 3
Non-Manifest   | Ubuntu Linux | multiplatform | Less Secure | Recommended if arm64 in prod | Section 4
Any            | Custom       | custom        | custom      |                              | Section 5

Section 1 : Setting up for Alpine Linux using a Manifest-based project

This applies if you created your project by downloading a manifest.zip file from https://start.broadleafcommerce.com. No additional setup is required when generating a new project from your manifest, as this is the default behavior since 2.0.1. Alternatively, if you are updating an existing 2.0.0 project, refer to the 2.0.1 release notes for further instructions.

Considerations

  • Generally, developers do not build Docker images locally. Instead, they run the components directly from the command line or an IDE, in which case the Docker output is only a concern for CI/CD. However, from time to time, developers may want to run all components (including the microservices) locally via docker-compose or Kubernetes.

  • Since the platform of the generated images is amd64, arm architecture machines (e.g. Apple silicon) will require emulation. Docker setups generally default to QEMU, but if you are running on a Mac, it is advisable to enable Rosetta 2 in your Docker environment for better performance.

  • You may also choose to edit Docker artifacts manually. See Appendix A for examples.

Section 2 : Setting up for Ubuntu Linux using a Manifest-based project

This applies if you created your project by downloading a manifest.zip file from https://start.broadleafcommerce.com. To enable this configuration, set the project.useAlpineJavaImages property to false in manifest.yml and rebuild your manifest library using mvn clean install. Any subsequent execution of mvn flex:generate from the command line in your manifest directory will produce updated Docker artifacts that support the multiplatform Ubuntu image. Alternatively, if you are updating an existing 2.0.0 project, refer to the 2.0.1 release notes for further instructions.
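For reference, the property lives in manifest.yml. The fragment below is a sketch: the nesting is inferred from the dotted property name project.useAlpineJavaImages, and surrounding keys are omitted.

```yaml
# manifest.yml (fragment) - nesting inferred from the dotted property name
project:
  useAlpineJavaImages: false   # default is true (Alpine images) since 2.0.1
```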

Considerations

  • Generally, developers do not build Docker images locally. Instead, they run the components directly from the command line or an IDE, in which case the Docker output is only a concern for CI/CD. However, from time to time, developers may want to run all components (including the microservices) locally via docker-compose or Kubernetes.

  • Since the generated images support both amd64 and arm64, performance will be optimal on either platform (at the cost of reduced security).

  • This may be the best option if you plan on deploying to an arm64 architecture in production.

  • You may also choose to edit Docker artifacts manually. See Appendix A for examples.

Section 3 : Setting up for Alpine Linux using a Non-Manifest project

This applies if your project pom files inherit from the 2.x.x starter but do not leverage a manifest module. There is no automation to update the project in this case; you will need to edit the Docker artifacts manually. See Appendix A for examples.

Section 4 : Setting up for Ubuntu Linux using a Non-Manifest project

This applies if your project pom files inherit from the 2.x.x starter but do not leverage a manifest module. There is no automation to update the project in this case; you will need to edit the Docker artifacts manually. See Appendix A for examples.

Section 5 : Setting up a custom Docker image

  • It is not a requirement to use Alpine or Ubuntu Linux images based on Eclipse Temurin 17

  • You may roll your own images by editing the Docker artifacts (see Appendix A for examples) to your liking

  • Generally, you will still benefit from the same build lifecycles that you would use for standard Broadleaf images

Appendix A : Docker Artifacts

Figure 1 : docker-bake.hcl

# Syntax reference: https://docs.docker.com/engine/reference/commandline/buildx_bake

# The fully qualified image tag (including any registry prefix) to use for the main image.
variable "FULLY_QUALIFIED_MAIN_IMAGE_TAG" {}

# The path to the application JAR file, supplied as a build argument to the Dockerfile.
variable "JAR_FILE" {}

# Whether or not to only build images for the architecture of the machine that is running the
# build process (instead of multi-platform images). This will also force the image output type to
# 'docker', which will load the image to the local registry.
#
# Intended for local-development purposes.
variable "BUILD_AND_LOAD_LOCAL_PLATFORM_ONLY" {
  default = false
}

# Whether the system should limit the build to an explicit platform
variable "LIMIT_PLATFORM" {
  default = "none"
}

# Configure the output type (ex: 'type=registry')
# Only honored when 'BUILD_AND_LOAD_LOCAL_PLATFORM_ONLY' is false.
variable "OUTPUT_TYPE" {
  default = ""
}

function "get_target_platforms" {
  params = []
  # Empty strings are filtered out, so if 'BUILD_AND_LOAD_LOCAL_PLATFORM_ONLY', then
  # platforms will be empty and only the current platform will be built.
  #
  # Note - this empty-list approach is used because specifying the built-in variable
  # 'BAKE_LOCAL_PLATFORM' didn't work. On Apple silicon, the value is resolved to 'darwin/arm64/v8',
  # and that causes errors when resolving the boot layer image. The 'normal' default-platform
  # behavior from Docker (when platform is omitted) works correctly in resolving and building
  # a 'linux/arm64' image.
  result = LIMIT_PLATFORM == "none" ? [ BUILD_AND_LOAD_LOCAL_PLATFORM_ONLY ? "" : "linux/amd64",
    BUILD_AND_LOAD_LOCAL_PLATFORM_ONLY ? "" : "linux/arm64"] : ["${LIMIT_PLATFORM}"]
}

function "get_output_config" {
  params = []
  result = [ BUILD_AND_LOAD_LOCAL_PLATFORM_ONLY ? "type=docker" : "${OUTPUT_TYPE}" ]
}

# Should contain all the targets that should be built by default
group "default" {
  targets = ["main_image"]
}

target "main_image" {
  dockerfile = "Dockerfile"
  platforms = get_target_platforms()
  tags = [ "${FULLY_QUALIFIED_MAIN_IMAGE_TAG}" ]
  args = {
    JAR_FILE = "${JAR_FILE}"
  }
  output = get_output_config()
}
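The ternary logic in get_target_platforms above can be mirrored in plain shell to see how the two variables interact. This is an illustrative re-implementation only, not part of the build.

```shell
# Illustrative shell mirror of the bake file's platform selection
get_target_platforms() {
  limit_platform="$1"   # LIMIT_PLATFORM
  local_only="$2"       # BUILD_AND_LOAD_LOCAL_PLATFORM_ONLY
  if [ "$limit_platform" != "none" ]; then
    echo "$limit_platform"            # explicit platform wins
  elif [ "$local_only" = "true" ]; then
    echo ""                           # empty list -> Docker uses the current platform
  else
    echo "linux/amd64 linux/arm64"    # default multiplatform build
  fi
}

get_target_platforms "none" "false"         # -> linux/amd64 linux/arm64
get_target_platforms "linux/amd64" "false"  # -> linux/amd64
```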

Figure 2a : Alpine Dockerfile

FROM eclipse-temurin:17.0.10_7-jdk-alpine as builder
ARG JAR_FILE
ADD ${JAR_FILE} app.jar
RUN java -Djarmode=layertools -jar app.jar extract

# Grant the group the same permissions as the owner user for consistency
# Note that we do this permission change in the builder image to avoid creating large intermediary
# layers in the final runner image.
#
# Note - for the most part, the COPY command below will honor the permissions set for these files in the builder
# image. However, for any leading directories Docker has to create itself, permissions will match
# the default umask for Docker, which is not configurable at the time of writing.
# This is technically not an issue in most cases, since read access is always available and write
# access to these directories is almost never necessary
RUN chmod -R g=u "dependencies/"
RUN chmod -R g=u "spring-boot-loader/"
RUN chmod -R g=u "snapshot-dependencies/"
RUN chmod -R g=u "application/"

FROM eclipse-temurin:17.0.10_7-jre-alpine

VOLUME /tmp

# Expect all application files to be copied to this directory
RUN mkdir "/blc-app" && chown -R 1000:0 "/blc-app" && chmod -R g=u "/blc-app"

ADD ./docker-exec.sh docker-exec.sh
RUN chown 1000:0 "docker-exec.sh" && chmod ug+x "docker-exec.sh"
USER root
ADD ./tools-exec.sh tools-exec.sh
RUN chmod ug+x "tools-exec.sh"
ENV ALPINE="-alpine"
ENV IMAGEMAGICK_ACTIVE=""
ENV UPGRADE=""
RUN ./tools-exec.sh && rm -rf tools-exec.sh

# NOTE - because we aim to be compatible with both OpenShift (runs as random UID) and standard
# Kubernetes, we cannot have any user-specific configuration in our Dockerfile.
# For compatibility with standard Kubernetes, we explicitly switch to a non-root UID. OpenShift
# will ignore these settings and run as an arbitrary UID in the root group.
USER 1000

# We assume 1000 UID for standard Kubernetes, and an arbitrary UID + root group (0)
# for OpenShift. Thus, grant both of those ownership here.
# Also, move the application files into the "/blc-app" directory (used by the entrypoint script).
COPY --chown=1000:0 --from=builder dependencies/ /blc-app
COPY --chown=1000:0 --from=builder spring-boot-loader/ /blc-app
COPY --chown=1000:0 --from=builder snapshot-dependencies/ /blc-app
COPY --chown=1000:0 --from=builder application/ /blc-app

EXPOSE 8443
EXPOSE 8000
EXPOSE 8080
EXPOSE 9001


# Use the 'exec form' of entrypoint to ensure signals like 'KILL' are forwarded to the running process.
# https://docs.docker.com/engine/reference/builder/#exec-form-entrypoint-example
ENTRYPOINT ["./docker-exec.sh"]

Figure 2b : Ubuntu Dockerfile

FROM eclipse-temurin:17.0.10_7-jdk as builder
ARG JAR_FILE
ADD ${JAR_FILE} app.jar
RUN java -Djarmode=layertools -jar app.jar extract

# Grant the group the same permissions as the owner user for consistency
# Note that we do this permission change in the builder image to avoid creating large intermediary
# layers in the final runner image.
#
# Note - for the most part, the COPY command below will honor the permissions set for these files in the builder
# image. However, for any leading directories Docker has to create itself, permissions will match
# the default umask for Docker, which is not configurable at the time of writing.
# This is technically not an issue in most cases, since read access is always available and write
# access to these directories is almost never necessary
RUN chmod -R g=u "dependencies/"
RUN chmod -R g=u "spring-boot-loader/"
RUN chmod -R g=u "snapshot-dependencies/"
RUN chmod -R g=u "application/"

FROM eclipse-temurin:17.0.10_7-jre

VOLUME /tmp

# Expect all application files to be copied to this directory
RUN mkdir "/blc-app" && chown -R 1000:0 "/blc-app" && chmod -R g=u "/blc-app"

ADD ./docker-exec.sh docker-exec.sh
RUN chown 1000:0 "docker-exec.sh" && chmod ug+x "docker-exec.sh"
USER root
ADD ./tools-exec.sh tools-exec.sh
RUN chmod ug+x "tools-exec.sh"
ENV ALPINE=""
ENV IMAGEMAGICK_ACTIVE=""
ENV UPGRADE=""
RUN ./tools-exec.sh && rm -rf tools-exec.sh

# NOTE - because we aim to be compatible with both OpenShift (runs as random UID) and standard
# Kubernetes, we cannot have any user-specific configuration in our Dockerfile.
# For compatibility with standard Kubernetes, we explicitly switch to a non-root UID. OpenShift
# will ignore these settings and run as an arbitrary UID in the root group.
USER 1000

# We assume 1000 UID for standard Kubernetes, and an arbitrary UID + root group (0)
# for OpenShift. Thus, grant both of those ownership here.
# Also, move the application files into the "/blc-app" directory (used by the entrypoint script).
COPY --chown=1000:0 --from=builder dependencies/ /blc-app
COPY --chown=1000:0 --from=builder spring-boot-loader/ /blc-app
COPY --chown=1000:0 --from=builder snapshot-dependencies/ /blc-app
COPY --chown=1000:0 --from=builder application/ /blc-app

EXPOSE 8443
EXPOSE 8000
EXPOSE 8080
EXPOSE 9001


# Use the 'exec form' of entrypoint to ensure signals like 'KILL' are forwarded to the running process.
# https://docs.docker.com/engine/reference/builder/#exec-form-entrypoint-example
ENTRYPOINT ["./docker-exec.sh"]

Figure 3 : tools-exec.sh

#!/bin/sh

# Conditional assembly for container type and platform
if [ "$ALPINE" ]; then
  apk update
else
  apt-get update
fi
if [ "$IMAGEMAGICK_ACTIVE" ]; then
  if [ "$ALPINE" ]; then
    apk add --no-cache imagemagick jpeg
  else
    apt-get install -y imagemagick jpeg
  fi
fi
if [ "$UPGRADE" ]; then
  if [ "$ALPINE" ]; then
    apk add --upgrade apk-tools && apk upgrade --available
  else
    apt-get upgrade -y
  fi
fi

Figure 4 : docker-exec.sh

#!/bin/sh

# Used as the 'entrypoint' script in base images to start a Spring Boot application.

if [ "$DEBUG_PORT" ]; then
  export JAVA_OPTS="$JAVA_OPTS -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:$DEBUG_PORT"
fi

export JAVA_OPTS="$JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -Dspring.cloud.bootstrap.enabled=true --add-opens=jdk.management/com.sun.management.internal=ALL-UNNAMED --add-opens=java.base/jdk.internal.misc=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED --add-opens=jdk.internal.jvmstat/sun.jvmstat.monitor=ALL-UNNAMED --add-opens=java.base/sun.reflect.generics.reflectiveObjects=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED --add-opens=java.base/java.time=ALL-UNNAMED --add-opens=java.base/java.time.format=ALL-UNNAMED"

# Expect the actual application files to be in this directory. Default is '/blc-app', which
# the base image Dockerfiles create. However, if the final image defines it in a separate location,
# they can override the default via this 'ENTRYPOINT_BLC_APP_DIRECTORY' environment variable.
if [ -z "$ENTRYPOINT_BLC_APP_DIRECTORY" ]; then
  echo "ENTRYPOINT_BLC_APP_DIRECTORY was not explicitly provided, using default as launch location: '/blc-app'"
  export ENTRYPOINT_BLC_APP_DIRECTORY="/blc-app"
else
  echo "ENTRYPOINT_BLC_APP_DIRECTORY was explicitly provided, using it as launch location: '$ENTRYPOINT_BLC_APP_DIRECTORY'"
fi
cd "$ENTRYPOINT_BLC_APP_DIRECTORY" || exit 1

# The container will be running either in standard Kubernetes as a consistent user
# (ex: UID 1000), or in OpenShift as a random UID under the root group.
#
# While volume mount configuration is responsible for basic ownership and permission settings, it
# does not guarantee equal file access by all containers, particularly for files _created_ by the
# containers. The default 'umask' setting on many distributions is '022', which grants the owner
# user 'read'/'write', but only grants the owner group 'read'. This would be problematic in OpenShift,
# where the UID is not consistent and only the GID is. We need to ensure that if a container with
# UID 1 creates a file, a container with UID 2 will be able to have the same access ('read'/'write')
# to that file.
#
# By overriding the umask of the entrypoint process to a sensible default of '002', the owner user
# and group will both be given equivalent 'read'/'write' access to the files (and
# 'read'/'write'/'search' on directories). This ensures all files created by the running application
# will be equally accessible by other instances of the application (when targeting a shared volume),
# even when their UIDs differ.
if [ "$DISABLE_ENTRYPOINT_UMASK_OVERRIDE" ]; then
  echo "DISABLE_ENTRYPOINT_UMASK_OVERRIDE was set, so not adjusting umask configuration"
else
  echo "DISABLE_ENTRYPOINT_UMASK_OVERRIDE was not set, so setting umask configuration to '0002'"
  umask 0002
  printf "umask is now '%s'\n" "$(umask)"
fi

# Moved into a shell script because the above 'export' statements cannot be retrieved
# between multiple statements in a Dockerfile
echo "Starting Java with the arguments '$JAVA_OPTS'"

# Use 'exec' to ensure the running process responds to signals like 'KILL'
#
# This command assumes the image will _not_ use a Spring Boot fat JAR and will instead
# run something like 'java -Djarmode=layertools -jar app.jar extract' to pre-explode the JAR
# contents and copy them directly.
exec java $JAVA_OPTS org.springframework.boot.loader.JarLauncher
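The umask behavior described in the comments of Figure 4 can be checked with a quick experiment on any Linux shell (illustrative only; filenames are arbitrary):

```shell
# Show the permission difference between a common default umask (022)
# and the override applied by the entrypoint script (0002)
workdir=$(mktemp -d)
cd "$workdir"

umask 022
touch owner-only.txt        # created as 644
umask 0002
touch group-writable.txt    # created as 664

# Print just the permission bits
ls -l owner-only.txt     | cut -c1-10   # group cannot write
ls -l group-writable.txt | cut -c1-10   # group can write
```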

Figure 5 : pom.xml (Platform Maven Property)

  • Generally used to force docker buildx bake to use an explicit platform

  • Used when building an Alpine image targeted at amd64 architecture

  • Assumes component pom inherits from broadleaf-microservices-flex-parent

  • Assumes the Maven build leverages the inherited maven-exec-plugin declaration for the docker build from broadleaf-microservices-flex-parent (rather than rolling your own). The property is referenced in the parent declaration of the plugin.

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<project xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd" xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>com.broadleafcommerce.microservices</groupId>
        <artifactId>broadleaf-microservices-flex-parent</artifactId>
        <version>2.0.1-GA</version>
        <relativePath/>
    </parent>
    ...
    <artifactId>microservice-flexpackage-auth</artifactId>
    <groupId>com.example.microservices</groupId>
    <name>Auth Flexpackage Starter</name>
    <description>Auth Flexpackage Starter</description>
    <version>1.0.0-SNAPSHOT</version>
    <properties>
        ...
        <limit-image-target-platform>linux/amd64</limit-image-target-platform>
        ...
    </properties>
    ...
</project>

Figure 6 : docker-compose.yml

  • Broadleaf production-ready images for gateway and config are available in Alpine flavors

  • For your project build, if using Alpine, it is advisable to use the platform directive as well (see auth example below)

version: '3.2'
services:
  admingateway:
    ...
    platform: linux/amd64
    image: repository.broadleafcommerce.com:5001/broadleaf/admingateway-monitored-alpine:2.0.1-GA
    ...
  commercegateway:
    ...
    platform: linux/amd64
    image: repository.broadleafcommerce.com:5001/broadleaf/commercegateway-monitored-alpine:2.0.1-GA
    ...
  auth:
    ...
    platform: linux/amd64
    image: repository.broadleafcommerce.com:5001/broadleaf-demo/auth:1.0.0-SNAPSHOT
    ...
  config:
    ...
    platform: linux/amd64
    image: repository.broadleafcommerce.com:5001/broadleaf/broadleaf-config-server-platform-alpine:2.0.1-GA
    ...
  ...

Appendix B : Build Considerations

Prerequisites

  • The machine running the build process must have QEMU emulators installed for all target platforms, and they must be accessible to Docker. Usually, Docker Desktop comes with these emulators, but if your installation doesn’t, you can install them manually as documented here.

  • If performing a multiplatform build, the machine running the build process must have created a builder instance that uses the docker-container driver, and that builder should be set as the currently active instance.

    # Note - you only have to run this once on your machine, not for every project
    # This spins up a Docker container that runs the builder instance - the name doesn't matter
    docker buildx create --name "my-blc-multiplat-builder" --driver="docker-container" --bootstrap --use

    Without this, you would see an error like so:

    error: multiple platforms feature is currently not supported for docker driver. Please switch to a different driver (eg. "docker buildx create --use")

Frontend projects

Frontend projects just use a Dockerfile and are expected to be built directly with docker commands like docker buildx build (with additions like the --platform argument).

Note
In some cases, the Dockerfile may expect yarn build (or similar) to have been run already. You can review each Dockerfile for more information.
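As a concrete sketch, a frontend image build might look like the following. The image tag and build context are placeholders, and the command requires a running Docker daemon.

```shell
# Hypothetical frontend image build; adjust tag, platform, and context to your project
docker buildx build \
  --platform linux/amd64 \
  -t repo.example.com/my-frontend:1.0.0-SNAPSHOT \
  --load \
  .
```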

Backend projects

Backend projects have special Maven profiles (typically called docker) in their pom.xml which invoke docker buildx bake (documented here by Docker). This in turn leverages both a docker-bake.hcl file (commonly shared between multiple modules in a project) and a Dockerfile that together describe the build configuration.
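Activating such a profile from a module directory might look like the following. The profile name comes from the text above; the exact goals and any additional flags depend on your project's build setup.

```shell
# Build the module and its Docker image in one Maven invocation (sketch)
mvn clean install -Pdocker
```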

Note
docker buildx bake was chosen for its maintainability advantages, as it allows complexities to be placed into the docker-bake.hcl file (documented here by Docker) and simplifies the configuration of the Maven profile.