Broadleaf Microservices
  • v1.0.0-latest-prod

Docker Configuration

Introduction

Broadleaf’s starter projects include various modules that produce Docker images. The Docker build configuration in those modules follows what Broadleaf uses internally for its own sample images. However, this configuration is not intended for direct client use and should be replaced.

The chief reason is that clients should create their own Docker build configurations in ways that suit their needs; the Broadleaf sample configurations should be used merely as a reference.

Find all Docker build configurations in the starter projects

For backend projects, Broadleaf’s Docker build configurations typically involve two main components: a Dockerfile and a pom.xml containing a docker profile that builds the image for it. This is because we prefer to build Docker images as part of the Maven lifecycle. For frontend projects, the process typically just involves a Dockerfile.

You can run find . -name Dockerfile from the root of your starter project repository to list every path at which a Dockerfile is found.

Then, for backend projects, you can use those paths to also find the corresponding pom.xml in charge of building the image, since they are usually in the same directory.
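The two lookups can be sketched end-to-end; the directory layout below is a throwaway mock created only for the demonstration (a real run starts from your repository root instead):

```shell
# Throwaway mock layout standing in for a starter project (names are invented)
demo=$(mktemp -d)
mkdir -p "$demo/services/cart" "$demo/clients/admin"
touch "$demo/services/cart/Dockerfile" \
      "$demo/services/cart/pom.xml" \
      "$demo/clients/admin/Dockerfile"
cd "$demo"

# List every Dockerfile in the tree
find . -name Dockerfile

# For backend projects, check for a sibling pom.xml next to each Dockerfile
for f in $(find . -name Dockerfile); do
  dir=$(dirname "$f")
  if [ -f "$dir/pom.xml" ]; then
    echo "backend image module: $dir"
  fi
done
```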

Understand existing sample Docker build configuration

In an effort to support both ARM (arm64) and X86 (amd64) platforms, Broadleaf changed its Docker build processes starting with version 1.7.4 to output multi-platform images.

Multi-platform (versions 1.7.4 and beyond)

The sample Docker configurations in versions 1.7.4 and beyond use docker buildx (see Docker's buildx documentation) to build multi-platform images.

Prerequisites and limitations for multi-platform Docker builds

Prerequisites
  • The machine running the build process must have QEMU emulators installed for all target platforms, and they must be accessible to Docker. Docker Desktop usually ships with these emulators, but if your installation doesn’t, you can install them manually as described in Docker's multi-platform build documentation.

  • The machine running the build process must have created a new builder instance which uses the docker-container driver, and it should be set as the currently active builder instance.

    # Note - you only have to run this once on your machine, not for every project
    # This spins up a Docker container that runs the builder instance - the name doesn't matter
    docker buildx create --name "my-blc-multiplat-builder" --driver="docker-container" --bootstrap --use

    Without this, you would see an error like so:

    error: multiple platforms feature is currently not supported for docker driver. Please switch to a different driver (eg. "docker buildx create --use")

Limitations
  • At this time, the local Docker image store cannot hold multi-platform images (a general Docker limitation, not something specific to Broadleaf). By default, the backend sample Docker build process (when engaged with mvn package instead of mvn deploy) produces multi-platform images and then discards the result. This means you can still engage the multi-platform build locally to verify that every target platform builds correctly; the final images just won’t be stored anywhere. The command may even emit a warning like this:

    WARNING: No output specified for docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load

Frontend projects

Frontend projects just use a Dockerfile and are expected to be built directly with docker commands like docker buildx build (with additions like the --platform argument).

Note
In some cases, the Dockerfile may expect yarn build (or similar) to have been run already. You can review each Dockerfile for more information.
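For reference, a hypothetical multi-platform invocation might look like the following (the registry, image name, tag, and build context are placeholders, and the docker-container builder described earlier must be active):

```shell
# Placeholders: swap in your own registry, image name, and tag.
# --push sends the multi-platform result straight to a registry, since the
# local image store cannot hold multi-platform images.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t registry.example.com/my-org/my-frontend:1.0.0 \
  --push \
  .
```

To test a single-platform image locally instead, drop the extra platform and replace --push with --load.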

Backend projects

Backend projects have special Maven profiles (typically called docker) in their pom.xml which invoke docker buildx bake (see Docker's buildx bake documentation). This in turn leverages both a docker-bake.hcl file (often shared among multiple modules in a project) and a Dockerfile that together describe the build configuration.

Note
docker buildx bake was chosen for its maintainability advantages, as it allows complexities to be placed into the docker-bake.hcl file (see Docker's bake file reference) and simplifies the configuration of the Maven profile.
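As a rough sketch only (the target and variable names below are invented, not the actual contents of Broadleaf's docker-bake.hcl), a bake file centralizes the build description along these lines:

```hcl
# Hypothetical docker-bake.hcl - names and values are illustrative only
variable "IMAGE_TAG" {
  default = "latest"
}

target "app" {
  dockerfile = "Dockerfile"
  platforms  = ["linux/amd64", "linux/arm64"]
  tags       = ["registry.example.com/my-org/my-service:${IMAGE_TAG}"]
}
```

With the complexity held in the bake file, the Maven profile only has to shell out to something like docker buildx bake app, passing variables as needed.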

Local development workaround

To bypass multi-platform Docker limitations and support local development requirements, the sample Maven profiles typically also expose a buildLocalDockerOnly property. By setting this to true (ex: mvn package -Pdocker -DbuildLocalDockerOnly=true), the configuration will only build the image for the architecture running the build process, and the image will be loaded into the local Docker image store (it’ll show up in docker images). This allows a developer to run their locally built image.
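Assuming placeholder image names, the local development loop then looks roughly like this (requires the Maven profile and Docker to be set up as described):

```shell
# Build only for the current architecture and load the result into the
# local Docker image store
mvn package -Pdocker -DbuildLocalDockerOnly=true

# The image now shows up locally and can be run as usual
# (image name/tag and port are placeholders)
docker images
docker run --rm -p 8080:8080 my-org/my-service:latest
```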

Single-platform (versions 1.7.3 and prior)

The sample Docker configurations in versions 1.7.3 and prior only support single-platform Docker builds to produce X86 images.

Note

To be absolutely clear, Broadleaf’s applications have always been platform-agnostic and themselves require no changes to work on other platforms such as ARM.

Java applications just require an ARM-compatible JRE/JDK (see Temurin releases), and frontend applications just require an ARM-compatible JavaScript runtime.

The 'lack of multi-platform support' in this context is specifically regarding the sample Docker images containing those applications.

Frontend projects

Frontend projects just use a Dockerfile and are expected to be built directly with docker commands like docker build.

Note
In some cases, the Dockerfile may expect yarn build (or similar) to have been run already. You can review each Dockerfile for more information.

Backend projects

Backend projects have special Maven profiles (typically called docker) in their pom.xml which invoke the Maven plugin dockerfile-maven-plugin. The plugin then builds the image from the specified Dockerfile.

This configuration is quite simple, and you can follow the dockerfile-maven-plugin docs to understand it more.
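As a hedged sketch (the plugin version, repository, and tag values below are illustrative, not necessarily what your starter project declares), such a profile looks roughly like:

```xml
<!-- Hypothetical 'docker' profile; repository and version values are placeholders -->
<profile>
  <id>docker</id>
  <build>
    <plugins>
      <plugin>
        <groupId>com.spotify</groupId>
        <artifactId>dockerfile-maven-plugin</artifactId>
        <version>1.4.13</version>
        <executions>
          <execution>
            <id>build-image</id>
            <goals>
              <goal>build</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <repository>registry.example.com/my-org/my-service</repository>
          <tag>${project.version}</tag>
        </configuration>
      </plugin>
    </plugins>
  </build>
</profile>
```

Running mvn package -Pdocker then builds the image from the module's Dockerfile as part of the normal lifecycle.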

Replace existing sample Docker configuration

Due to differences in client needs and expectations, this section is purposefully not prescriptive.

In some cases, clients may drop all sample Docker configuration in favor of their own custom procedures (ex: use completely different tooling to build the image such as Google Jib). In others, they may choose to just slightly tweak the configuration (ex: remove multi-platform support).

However, at a minimum, we suggest all clients at least make the following changes:

  • Replace the base image in all Dockerfiles so they no longer use Broadleaf’s base image.

  • Change the image names/tags, as the defaults are unsuitable for clients (for example, they should target the client’s Docker registry instead of Broadleaf’s).

Broadleaf Boot Layer Service Docker Base Image

By default, many of the starter projects have a Dockerfile whose base image is something like repository.broadleafcommerce.com:5001/broadleaf/boot-layer-service:…-jre-… (ex: repository.broadleafcommerce.com:5001/broadleaf/boot-layer-service:11.0.14.1_1-jre-1).

This is a multi-platform base image used internally by Broadleaf to create sample images for its various projects.

Note
Versions before 1.7.4 may be using an older base image such as repository.broadleafcommerce.com:5001/broadleaf/boot-service-jdk11 or repository.broadleafcommerce.com:5001/broadleaf/boot-layer-service-jdk11. These are deprecated.

Broadleaf’s base images are not intended to be used by clients directly:

  • Updates to these images and new tags are not announced

  • Updates to these images are not guaranteed backwards compatible or stable

  • Use of the images requires authentication to the Broadleaf Docker registry

  • Use of an old base image may result in falling behind in security updates

We strongly advise clients create, use, and maintain their own base images instead of relying on Broadleaf’s base image.

For your reference only, below we have provided an example of source components used to create the Broadleaf base image. We build this image for multiple platforms using docker buildx.

The image defaults to running as a non-root user 1000 and the root group, and should be compatible with both standard Kubernetes and OpenShift.
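The chmod g=u pattern that appears throughout these Dockerfiles can be demonstrated in isolation (the file name below is purely for the demo):

```shell
# Demonstrate the 'g=u' pattern: copy the owning user's permission bits onto
# the group, so any UID running in that group (the OpenShift case) gets
# equivalent access to the file.
work=$(mktemp -d)
touch "$work/app.jar"
chmod 640 "$work/app.jar"   # owner: rw, group: r, other: -
chmod g=u "$work/app.jar"   # group now mirrors the owner: rw
ls -l "$work/app.jar"
```

This is the same effect the base image achieves with chmod -R g=u "/blc-app": whatever the owner can do to the application files, the group can do too.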

  • Dockerfile

    # Used as a base image that other Spring Boot-based
    # docker containers can be based off of.
    #
    # This is an official image with multi-platform support.
    # OS is Ubuntu (at the time of writing, there is no multi-platform Alpine image)
    # Importantly, this supports 'linux/amd64' and 'linux/arm64/v8'
    # See https://hub.docker.com/_/eclipse-temurin
    FROM eclipse-temurin:11.0.14.1_1-jre
    
    VOLUME /tmp
    
    # Expect all consumers of this image to copy their application files to this directory
    RUN mkdir "/blc-app" && chown -R 1000:0 "/blc-app" && chmod -R g=u "/blc-app"
    
    ADD ./supporting/run-app.sh run-app.sh
    RUN chown 1000:0 "run-app.sh" && chmod ug+x "run-app.sh"
    
    # NOTE - because we aim to be compatible with both OpenShift (runs as random UID) and standard
    # Kubernetes, we cannot have any user-specific configuration in our Dockerfile.
    # For compatibility with standard Kubernetes, we explicitly switch to a non-root UID. OpenShift
    # will ignore these settings and run as an arbitrary UID in the root group.
    USER 1000
    
    # Use the 'exec form' of entrypoint to ensure signals like 'KILL' are forwarded to the running process.
    # https://docs.docker.com/engine/reference/builder/#exec-form-entrypoint-example
    ENTRYPOINT ["./run-app.sh"]
  • run-app.sh

    #!/bin/sh
    
    # Used as the 'entrypoint' script in base images to start a Spring Boot application.
    
    if [ "$DEBUG_PORT" ]; then
      export JAVA_OPTS="$JAVA_OPTS -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:$DEBUG_PORT"
    fi
    
    export JAVA_OPTS="$JAVA_OPTS -Djava.security.egd=file:/dev/./urandom"
    
    # Expect the actual application files to be in this directory. Default is '/blc-app', which
    # the base image Dockerfiles create. However, if the final image defines it in a separate location,
    # they can override the default via this 'ENTRYPOINT_BLC_APP_DIRECTORY' environment variable.
    if [ -z "$ENTRYPOINT_BLC_APP_DIRECTORY" ]; then
      echo "ENTRYPOINT_BLC_APP_DIRECTORY was not explicitly provided, using default as launch location: '/blc-app'"
      export ENTRYPOINT_BLC_APP_DIRECTORY="/blc-app"
    else
      echo "ENTRYPOINT_BLC_APP_DIRECTORY was explicitly provided, using it as launch location: '$ENTRYPOINT_BLC_APP_DIRECTORY'"
    fi
    cd "$ENTRYPOINT_BLC_APP_DIRECTORY" || exit 1
    
    # The container will be running either in standard Kubernetes as a consistent user
    # (ex: UID 1000), or in OpenShift as a random UID under the root group.
    #
    # While volume mount configuration is responsible for basic ownership and permission settings, it
    # does not guarantee equal file access by all containers, particularly for files _created_ by the
    # containers. The default 'umask' setting on many distributions is '022', which grants the owner
    # user 'read'/'write', but only grants the owner group 'read'. This would be problematic in OpenShift,
    # where the UID is not consistent and only the GID is. We need to ensure that if a container with
    # UID 1 creates a file, a container with UID 2 will be able to have the same access ('read'/'write')
    # to that file.
    #
    # By overriding the umask of the entrypoint process to a sensible default of '002', the owner user
    # and group will both be given equivalent 'read'/'write' access to the files (and
    # 'read'/'write'/'search' on directories). This ensures all files created by the running application
    # will be equally accessible by other instances of the application (when targeting a shared volume),
    # even when their UIDs differ.
    if [ "$DISABLE_ENTRYPOINT_UMASK_OVERRIDE" ]; then
      echo "DISABLE_ENTRYPOINT_UMASK_OVERRIDE was set, so not adjusting umask configuration"
    else
      echo "DISABLE_ENTRYPOINT_UMASK_OVERRIDE was not set, so setting umask configuration to '0002'"
      umask 0002
      printf "umask is now '%s'\n" "$(umask)"
    fi
    
    # Moved into a shell script because 'export'ed variables do not persist
    # between separate instructions in a Dockerfile
    echo "Starting Java with the arguments '$JAVA_OPTS'"
    
    # Use 'exec' to ensure the running process responds to signals like 'KILL'
    #
    # This command assumes the image will _not_ use a Spring Boot fat JAR and will instead
    # run something like 'java -Djarmode=layertools -jar app.jar extract' to pre-explode the JAR
    # contents and copy them directly.
    exec java $JAVA_OPTS org.springframework.boot.loader.JarLauncher
  • Sample application Dockerfile that uses the above base

    # Application should have been built as a layered JAR, get the path to it in JAR_FILE
    FROM eclipse-temurin:11.0.14.1_1-jdk as builder
    ARG JAR_FILE
    ADD ${JAR_FILE} app.jar
    RUN java -Djarmode=layertools -jar app.jar extract
    
    # Grant the group the same permissions as the owner user for consistency
    # Note that we do this permission change in the builder image to avoid creating large intermediary
    # layers in the final runner image.
    #
    # Note - for the most part, the COPY command below will honor the permissions set for these files in the builder
    # image. However, for any leading directories Docker has to create itself, permissions will match
    # the default umask for Docker, which is not configurable at the time of writing.
    # This is technically not an issue in most cases, since read access is always available and write
    # access to these directories is almost never necessary
    RUN chmod -R g=u "dependencies/"
    RUN chmod -R g=u "spring-boot-loader/"
    RUN chmod -R g=u "snapshot-dependencies/"
    RUN chmod -R g=u "application/"
    
    # Base image reference
    FROM ${YOUR_BASE_IMAGE}
    
    # The base image assumes 1000 UID for standard Kubernetes, and an arbitrary UID + root group (0)
    # for OpenShift. Thus, grant both of those ownership here.
    # Also, move the application files into the "/blc-app" directory (defined by the base image and used
    # by the entrypoint script).
    COPY --chown=1000:0 --from=builder dependencies/ /blc-app
    COPY --chown=1000:0 --from=builder spring-boot-loader/ /blc-app
    COPY --chown=1000:0 --from=builder snapshot-dependencies/ /blc-app
    COPY --chown=1000:0 --from=builder application/ /blc-app
    
    # ... expose any ports, etc
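To build an image from a Dockerfile like the sample above, the JAR_FILE build argument must be supplied; a hypothetical invocation (paths, names, and tags are placeholders):

```shell
# JAR_FILE points at the layered Spring Boot JAR produced by the Maven build
docker build \
  --build-arg JAR_FILE=target/my-service-1.0.0.jar \
  -t registry.example.com/my-org/my-service:local \
  .
```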

Release Train 1.8.1 and beyond with optional Java 17

  • Dockerfile

    # Used as a base image that other Spring Boot-based
    # docker containers can be based off of.
    #
    # This is an official image with multi-platform support.
    # OS is Ubuntu (at the time of writing, there is no multi-platform Alpine image)
    # Importantly, this supports 'linux/amd64' and 'linux/arm64/v8'
    # See https://hub.docker.com/_/eclipse-temurin
    FROM eclipse-temurin:17.0.6_10-jre
    
    VOLUME /tmp
    
    # Expect all consumers of this image to copy their application files to this directory
    RUN mkdir "/blc-app" && chown -R 1000:0 "/blc-app" && chmod -R g=u "/blc-app"
    
    ADD ./supporting/run-app.sh run-app.sh
    RUN chown 1000:0 "run-app.sh" && chmod ug+x "run-app.sh"
    
    # NOTE - because we aim to be compatible with both OpenShift (runs as random UID) and standard
    # Kubernetes, we cannot have any user-specific configuration in our Dockerfile.
    # For compatibility with standard Kubernetes, we explicitly switch to a non-root UID. OpenShift
    # will ignore these settings and run as an arbitrary UID in the root group.
    USER 1000
    
    # Use the 'exec form' of entrypoint to ensure signals like 'KILL' are forwarded to the running process.
    # https://docs.docker.com/engine/reference/builder/#exec-form-entrypoint-example
    ENTRYPOINT ["./run-app.sh"]
  • run-app.sh

    #!/bin/sh
    
    # Used as the 'entrypoint' script in base images to start a Spring Boot application.
    
    if [ "$DEBUG_PORT" ]; then
      export JAVA_OPTS="$JAVA_OPTS -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:$DEBUG_PORT"
    fi
    
    export JAVA_OPTS="$JAVA_OPTS -Djava.security.egd=file:/dev/./urandom --add-opens=jdk.management/com.sun.management.internal=ALL-UNNAMED --add-opens=java.base/jdk.internal.misc=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED --add-opens=jdk.internal.jvmstat/sun.jvmstat.monitor=ALL-UNNAMED --add-opens=java.base/sun.reflect.generics.reflectiveObjects=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED --add-opens=java.base/java.time=ALL-UNNAMED --add-opens=java.base/java.time.format=ALL-UNNAMED"
    
    # Expect the actual application files to be in this directory. Default is '/blc-app', which
    # the base image Dockerfiles create. However, if the final image defines it in a separate location,
    # they can override the default via this 'ENTRYPOINT_BLC_APP_DIRECTORY' environment variable.
    if [ -z "$ENTRYPOINT_BLC_APP_DIRECTORY" ]; then
      echo "ENTRYPOINT_BLC_APP_DIRECTORY was not explicitly provided, using default as launch location: '/blc-app'"
      export ENTRYPOINT_BLC_APP_DIRECTORY="/blc-app"
    else
      echo "ENTRYPOINT_BLC_APP_DIRECTORY was explicitly provided, using it as launch location: '$ENTRYPOINT_BLC_APP_DIRECTORY'"
    fi
    cd "$ENTRYPOINT_BLC_APP_DIRECTORY" || exit 1
    
    # The container will be running either in standard Kubernetes as a consistent user
    # (ex: UID 1000), or in OpenShift as a random UID under the root group.
    #
    # While volume mount configuration is responsible for basic ownership and permission settings, it
    # does not guarantee equal file access by all containers, particularly for files _created_ by the
    # containers. The default 'umask' setting on many distributions is '022', which grants the owner
    # user 'read'/'write', but only grants the owner group 'read'. This would be problematic in OpenShift,
    # where the UID is not consistent and only the GID is. We need to ensure that if a container with
    # UID 1 creates a file, a container with UID 2 will be able to have the same access ('read'/'write')
    # to that file.
    #
    # By overriding the umask of the entrypoint process to a sensible default of '002', the owner user
    # and group will both be given equivalent 'read'/'write' access to the files (and
    # 'read'/'write'/'search' on directories). This ensures all files created by the running application
    # will be equally accessible by other instances of the application (when targeting a shared volume),
    # even when their UIDs differ.
    if [ "$DISABLE_ENTRYPOINT_UMASK_OVERRIDE" ]; then
      echo "DISABLE_ENTRYPOINT_UMASK_OVERRIDE was set, so not adjusting umask configuration"
    else
      echo "DISABLE_ENTRYPOINT_UMASK_OVERRIDE was not set, so setting umask configuration to '0002'"
      umask 0002
      printf "umask is now '%s'\n" "$(umask)"
    fi
    
    # Moved into a shell script because 'export'ed variables do not persist
    # between separate instructions in a Dockerfile
    echo "Starting Java with the arguments '$JAVA_OPTS'"
    
    # Use 'exec' to ensure the running process responds to signals like 'KILL'
    #
    # This command assumes the image will _not_ use a Spring Boot fat JAR and will instead
    # run something like 'java -Djarmode=layertools -jar app.jar extract' to pre-explode the JAR
    # contents and copy them directly.
    exec java $JAVA_OPTS org.springframework.boot.loader.JarLauncher
  • Sample application Dockerfile that uses the above base

    # Application should have been built as a layered JAR, get the path to it in JAR_FILE
    FROM eclipse-temurin:17.0.6_10-jdk as builder
    ARG JAR_FILE
    ADD ${JAR_FILE} app.jar
    RUN java -Djarmode=layertools -jar app.jar extract
    
    # Grant the group the same permissions as the owner user for consistency
    # Note that we do this permission change in the builder image to avoid creating large intermediary
    # layers in the final runner image.
    #
    # Note - for the most part, the COPY command below will honor the permissions set for these files in the builder
    # image. However, for any leading directories Docker has to create itself, permissions will match
    # the default umask for Docker, which is not configurable at the time of writing.
    # This is technically not an issue in most cases, since read access is always available and write
    # access to these directories is almost never necessary
    RUN chmod -R g=u "dependencies/"
    RUN chmod -R g=u "spring-boot-loader/"
    RUN chmod -R g=u "snapshot-dependencies/"
    RUN chmod -R g=u "application/"
    
    # Base image reference
    FROM ${YOUR_BASE_IMAGE}
    
    # The base image assumes 1000 UID for standard Kubernetes, and an arbitrary UID + root group (0)
    # for OpenShift. Thus, grant both of those ownership here.
    # Also, move the application files into the "/blc-app" directory (defined by the base image and used
    # by the entrypoint script).
    COPY --chown=1000:0 --from=builder dependencies/ /blc-app
    COPY --chown=1000:0 --from=builder spring-boot-loader/ /blc-app
    COPY --chown=1000:0 --from=builder snapshot-dependencies/ /blc-app
    COPY --chown=1000:0 --from=builder application/ /blc-app
    
    # ... expose any ports, etc