
2021-11-17 (AIMS Sandbox)



Release Notes / Changelog

Summary

An assortment of Kubernetes Pod specification changes for the Java-based services, plus minor enhancements across many of the RCKMS services in support of parallel production testing and validation.

To support garbage collector (GC) logging in Java-based services, Kubernetes Pod specifications for most of the Java-based services will need to be amended to support a new shared volume mount, a sidecar container running Busybox (to tail logs into a log capture daemon), and updated Java Options environment variables to enable this output.

While these amendments are similar, there are key differences for some of these services, so we recommend reviewing each service's section before applying the changes to an environment.

DSS

  • : validate that Glassfish/Tomcat configurations are similar to existing RCKMS production values (as each container will have its own Glassfish/Tomcat config).

To enable GC logging, please update the Kubernetes pod specification for DSS:

In the primary DSS container, add a volume mount:

volume_mount {
  name       = "java-diag"
  mount_path = "/hln/diagnostics"
}

In the primary DSS container, modify (or add) the following environment variable, keyed JAVA_OPTS with content: -Xms16g -Xmx16g -XX:+UseG1GC -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/hln/diagnostics/garbageCollection.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=1 -XX:GCLogFileSize=10m -XX:+UseStringDeduplication
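In the HCL/Terraform-style syntax used by the pod specification snippets on this page, that environment variable might be expressed as follows (a sketch only; exact placement within the container block will vary by environment):

```hcl
env {
  name  = "JAVA_OPTS"
  value = "-Xms16g -Xmx16g -XX:+UseG1GC -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/hln/diagnostics/garbageCollection.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=1 -XX:GCLogFileSize=10m -XX:+UseStringDeduplication"
}
```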

NOTE: the Xms and Xmx values in the preceding environment variable should be less than or equal to 80% of the RAM allocated to each pod. For AIMS Sandbox, Onboard, and PRR, these values can be lower than the recommended values for production (for DSS, we recommend allocating 20GB of RAM to the primary container).
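As a worked example of the 80% guidance (illustrative values only, not part of the deployed configuration): a DSS pod allocated the recommended 20GB of RAM yields a 16GB heap, matching the -Xms16g/-Xmx16g settings above.

```hcl
# Illustrative arithmetic only: deriving heap size from the pod's memory allocation.
locals {
  dss_pod_memory_gb = 20                                    # RAM allocated to the primary DSS container
  dss_heap_gb       = floor(local.dss_pod_memory_gb * 0.8)  # 20 * 0.8 = 16 -> -Xms16g -Xmx16g
}
```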

Next, add the following sidecar container specification to DSS:

        # Busybox GC Logger
        container {
          name              = "gc-logger"
          image             = "busybox:latest"
          image_pull_policy = ""
          command = [
            "/bin/sh"
          ]
          args = [
            "-c",
            "tail -F -v /hln/diagnostics/garbageCollection.log"
          ]
          resources {
            limits = {
              cpu    = "150m"
              memory = "128Mi"
            }
            requests = {
              cpu    = "150m"
              memory = "128Mi"
            }
          }
          security_context {
            allow_privilege_escalation = false
            # non-root user
            run_as_user     = 1000
            run_as_non_root = true
          }
          volume_mount {
            name       = "java-diag"
            mount_path = "/hln/diagnostics"
          }
        }

Finally, add the following volume definition to the DSS pod specification:

volume {
  name = "java-diag"
  empty_dir {
    medium = ""
  }
}

DSUS

MTS

To improve reliability of the MTS container with production datasets, please add the following environment variable to the MTS container specification:

  • TCP_WRITE_TIMEOUT = 180000
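In the same HCL style as the pod specification snippets on this page, this could look like the following sketch (we are assuming, as is typical for such timeouts, that the value is in milliseconds, i.e. 180 seconds; confirm the units against the MTS documentation):

```hcl
env {
  name  = "TCP_WRITE_TIMEOUT"
  value = "180000"  # assumed milliseconds (180 s)
}
```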

To enable GC logging, please update the Kubernetes pod specification for MTS:

In the primary MTS container, add a volume mount:

volume_mount {
  name       = "java-diag"
  mount_path = "/hln/diagnostics"
}

In the primary MTS container, modify (or add) the following environment variable, keyed JAVA_OPTS with content -Xms8g -Xmx8g -XX:+UseG1GC -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/hln/diagnostics/garbageCollection.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=1 -XX:GCLogFileSize=10m -XX:+UseStringDeduplication

NOTE: the Xms and Xmx values in the preceding environment variable should be less than or equal to 80% of the RAM allocated to each pod. For AIMS Sandbox, Onboard, and PRR, these values can be lower than the recommended values for production (for MTS, we recommend allocating 10GB of RAM to the primary container).

Next, add the following sidecar container specification to MTS:

        # Busybox GC Logger
        container {
          name              = "gc-logger"
          image             = "busybox:latest"
          image_pull_policy = ""
          command = [
            "/bin/sh"
          ]
          args = [
            "-c",
            "tail -F -v /hln/diagnostics/garbageCollection.log"
          ]
          resources {
            limits = {
              cpu    = "150m"
              memory = "128Mi"
            }
            requests = {
              cpu    = "150m"
              memory = "128Mi"
            }
          }
          security_context {
            allow_privilege_escalation = false
            # non-root user
            run_as_user     = 1000
            run_as_non_root = true
          }
          volume_mount {
            name       = "java-diag"
            mount_path = "/hln/diagnostics"
          }
        }

Finally, add the following volume definition to the MTS pod specification:

volume {
  name = "java-diag"
  empty_dir {
    medium = ""
  }
}

OUS

SS

To enable support for the new operator diagnostic endpoint (accessible at GET /__/diagnostics), please add the following environment variable key/value pairs to the SS container specification:

  • SERVICE_BASE_DSS, which should be the fully-qualified URL to DSS within the service mesh (e.g. http://dss.svc.cluster.local)

  • SERVICE_BASE_VCS, which should be the fully-qualified URL to VCS within the service mesh (e.g. http://vcs.svc.cluster.local)
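Expressed in the HCL style used elsewhere on this page, this might look like the following sketch (the hostnames are the illustrative values from the bullets above; actual service DNS names depend on the cluster's namespace layout):

```hcl
env {
  name  = "SERVICE_BASE_DSS"
  value = "http://dss.svc.cluster.local"  # example value; adjust to the environment's service mesh
}
env {
  name  = "SERVICE_BASE_VCS"
  value = "http://vcs.svc.cluster.local"  # example value
}
```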

SSCS

To enable GC logging, please update the Kubernetes pod specification for SSCS:

In the primary SSCS container, add a volume mount:

volume_mount {
  name       = "java-diag"
  mount_path = "/hln/diagnostics"
}

In the primary SSCS container, modify (or add) the following environment variable, keyed JAVA_TOOL_OPTIONS with content -Xmx4g -Xms4g -XX:+UseG1GC -Xlog:gc*,gc+phases=debug:file=/hln/diagnostics/garbageCollection.log:uptime,utctime,level,tags,pid,hostname:filesize=10m,filecount=1 -XX:+UseStringDeduplication (note that SSCS uses the JDK 9+ unified logging syntax, -Xlog, rather than the JDK 8-era -XX:+PrintGCDetails flags used for the other services).

NOTE: the Xms and Xmx values in the preceding environment variable should be less than or equal to 80% of the RAM allocated to each pod. For AIMS Sandbox, Onboard, and PRR, these values can be lower than the recommended values for production (for SSCS, we recommend allocating 5GB of RAM to the primary container).

Next, add the following sidecar container specification to SSCS:

        # Busybox GC Logger
        container {
          name              = "gc-logger"
          image             = "busybox:latest"
          image_pull_policy = ""
          command = [
            "/bin/sh"
          ]
          args = [
            "-c",
            "tail -F -v /hln/diagnostics/garbageCollection.log"
          ]
          resources {
            limits = {
              cpu    = "150m"
              memory = "128Mi"
            }
            requests = {
              cpu    = "150m"
              memory = "128Mi"
            }
          }
          security_context {
            allow_privilege_escalation = false
            # non-root user
            run_as_user     = 1000
            run_as_non_root = true
          }
          volume_mount {
            name       = "java-diag"
            mount_path = "/hln/diagnostics"
          }
        }

Finally, add the following volume definition to the SSCS pod specification:

volume {
  name = "java-diag"
  empty_dir {
    medium = ""
  }
}

RGS

To enable GC logging, please update the Kubernetes pod specification for RGS:

In the primary RGS container, add a volume mount:

volume_mount {
  name       = "java-diag"
  mount_path = "/hln/diagnostics"
}

In the primary RGS container, modify (or add) the following environment variable, keyed JAVA_TOOL_OPTIONS with content -Xmx4g -Xms4g -XX:+UseG1GC -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/hln/diagnostics/garbageCollection.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=1 -XX:GCLogFileSize=10m -XX:+UseStringDeduplication

NOTE: the Xms and Xmx values in the preceding environment variable should be less than or equal to 80% of the RAM allocated to each pod. For AIMS Sandbox, Onboard, and PRR, these values can be lower than the recommended values for production (for RGS, we recommend allocating 5GB of RAM to the primary container).

Next, add the following sidecar container specification to RGS:

        # Busybox GC Logger
        container {
          name              = "gc-logger"
          image             = "busybox:latest"
          image_pull_policy = ""
          command = [
            "/bin/sh"
          ]
          args = [
            "-c",
            "tail -F -v /hln/diagnostics/garbageCollection.log"
          ]
          resources {
            limits = {
              cpu    = "150m"
              memory = "128Mi"
            }
            requests = {
              cpu    = "150m"
              memory = "128Mi"
            }
          }
          security_context {
            allow_privilege_escalation = false
            # non-root user
            run_as_user     = 1000
            run_as_non_root = true
          }
          volume_mount {
            name       = "java-diag"
            mount_path = "/hln/diagnostics"
          }
        }

Finally, add the following volume definition to the RGS pod specification:

volume {
  name = "java-diag"
  empty_dir {
    medium = ""
  }
}

VCS

To enable GC logging, please update the Kubernetes pod specification for VCS:

In the primary VCS container, add a volume mount:

volume_mount {
  name       = "java-diag"
  mount_path = "/hln/diagnostics"
}

In the primary VCS container, modify (or add) the following environment variable, keyed JAVA_TOOL_OPTIONS with content -Xmx4g -Xms4g -XX:+UseG1GC -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/hln/diagnostics/garbageCollection.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=1 -XX:GCLogFileSize=10m -XX:+UseStringDeduplication

NOTE: the Xms and Xmx values in the preceding environment variable should be less than or equal to 80% of the RAM allocated to each pod. For AIMS Sandbox, Onboard, and PRR, these values can be lower than the recommended values for production (for VCS, we recommend allocating 5GB of RAM to the primary container).

Next, add the following sidecar container specification to VCS:

        # Busybox GC Logger
        container {
          name              = "gc-logger"
          image             = "busybox:latest"
          image_pull_policy = ""
          command = [
            "/bin/sh"
          ]
          args = [
            "-c",
            "tail -F -v /hln/diagnostics/garbageCollection.log"
          ]
          resources {
            limits = {
              cpu    = "150m"
              memory = "128Mi"
            }
            requests = {
              cpu    = "150m"
              memory = "128Mi"
            }
          }
          security_context {
            allow_privilege_escalation = false
            # non-root user
            run_as_user     = 1000
            run_as_non_root = true
          }
          volume_mount {
            name       = "java-diag"
            mount_path = "/hln/diagnostics"
          }
        }

Finally, add the following volume definition to the VCS pod specification:

volume {
  name = "java-diag"
  empty_dir {
    medium = ""
  }
}

Bill of Materials

Changes to component SHA1 / Tag values (indicating a release) are shown as bold line entries. Components link to their respective documentation, and tags link to the GitHub repository release for that individual component.

Component                      Shortname    SHA1
cat-rckms                      CAT          4b422d3
data-support-update-service    DSUS         ca7d23a
decision-support-service       DSS          9a74c55
dss-preflight-container        DSS-PFC      eb564a1
middle-tier-service            MTS          3cf5306
opencds-update-service         OUS          3525806
rckms-reports-service          RRS          f11c8be
rules-generation-service       RGS          463fd9d
shared-service                 SS           88aa151
ss-comparison-service          SSCS         10716c6
vmr-converter-service          VCS          cbe2cce

: Add MongoDB connectivity check to k8s probe endpoints.

: validate that Glassfish/Tomcat configurations are similar to existing RCKMS production values (as each container will have its own Glassfish/Tomcat config).

: Add MongoDB connectivity check to k8s probe endpoints.

: Thinning of the logs.

: add operator diagnostic endpoint (accessible at GET /__/diagnostics) to validate SS configuration.

: add support for serviceResponseTime configuration variable.

: add diagnostic endpoint to RGS.

: Modify the order of Predicates so that embedded Concepts (such as Entity) are searched for at the proper time in Drools aligned with the vMR XPath.

: When wrapped by a function, an embedded Predicate Group that follows a Predicate is not properly joining. For example, the below ObservationValues are not joining to the ObservationFocus

: validate that Glassfish/Tomcat configurations are similar to existing RCKMS production values (as each container will have its own Glassfish/Tomcat config).

: validate that Glassfish/Tomcat configurations are similar to existing RCKMS production values (as each container will have its own Glassfish/Tomcat config).

Referenced tickets: RCKMSDEV-344, RCKMSDEV-452, RCKMSDEV-508, RCKMSDEV-509, RCKMSDEV-525, RCKMSDEV-526, RCKMSDEV-530, RCKMSDEV-531

Release tags: 1.1.0, 1.1.1, 1.2.0, 1.6.0, 1.6.2, 2.0.16, 2.3.0, 2.4.0, 2.11.1