Deploying ScanCentral SAST in Kubernetes

This document describes how to configure and use the scancentral-sast 25.2 Helm chart for ScanCentral SAST components in Kubernetes. You can find the ScanCentral SAST Helm chart at https://hub.docker.com/r/fortifydocker/helm-scancentral-sast.

Table of contents

Kubernetes versions

These charts were tested using the following Kubernetes versions:

Tool prerequisites

OpenText recommends that you use the same tool versions to avoid unpredictable results.

Installation

The following instructions are examples based on a default Kubernetes environment running on Linux and using the default namespace. Windows systems might require different syntax for certain commands, and other Kubernetes cluster providers might require additional or different configuration. Your Kubernetes administrator might require the use of specific namespaces or other configuration adjustments.
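If your Kubernetes administrator requires a dedicated namespace, create it once and pass it explicitly to every command in this guide. The namespace name fortify below is only an example:

```shell
# Example namespace; substitute the name your administrator assigns.
kubectl create namespace fortify

# Then append -n fortify to each kubectl and helm command, for example:
#   kubectl get pods -n fortify
#   helm install <release-name> ... -n fortify
```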

Installation prerequisites

Installation steps

  1. Creating an image pull secret
  2. Creating certificates and required Kubernetes secrets
  3. Installing ScanCentral SAST components
  4. Verifying that the ScanCentral SAST API is available
  5. Integrating ScanCentral SAST with Fortify Software Security Center
  6. Special Considerations for testing environments

Creating an image pull secret

By default, the Fortify ScanCentral SAST components Helm chart references its images directly from DockerHub. For Kubernetes to properly install your images using the default configuration, you must create an image pull secret and store it in your installation namespace in Kubernetes. If you are replicating these images to a local repository, you can skip this task and update the relevant image values in the Helm chart to reference your local repository. To create an image pull secret:
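A sketch of such a pull secret, assuming images are pulled directly from DockerHub. The secret name fortifydocker-pull-secret and the credentials are placeholders; reference the secret name through the chart's imagePullSecrets value:

```shell
# Placeholder name and credentials; create the secret in the namespace
# where the chart will be installed.
kubectl create secret docker-registry fortifydocker-pull-secret \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username='<docker-username>' \
  --docker-password='<docker-password-or-access-token>'
```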

Creating a secret to store ScanCentral client authentication secrets

ScanCentral SAST requires a secret to store the ScanCentral client authentication secrets. The secret name is specified in the values file.
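As a sketch, such a secret might be created as follows. The data key scancentral-ssc-scancentral-ctrl-secret is the one read back later in this guide; any additional keys your chart version expects should be taken from its values.yaml rather than from this example:

```shell
# Sketch only: consult the chart's values.yaml for the full set of keys.
kubectl create secret generic scancentral-chart-secret \
  --from-literal=scancentral-ssc-scancentral-ctrl-secret='<shared-secret-value>'
```

You then point the chart at the secret at install time with --set secrets.secretName=scancentral-chart-secret.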

Creating certificates and required Kubernetes secrets

ScanCentral SAST works only over HTTPS, so you must provide valid certificate stores. The following instructions use the default secret names from values.yaml, which you can rename if necessary. Make sure the following conditions are met:

  1. The Controller requires a keystore, mounted using Kubernetes secrets, that contains the server certificate.
  2. Sensors require a truststore, mounted using Kubernetes secrets, to validate the Controller certificate.
  3. The Controller requires a truststore, mounted using Kubernetes secrets, to validate the Fortify Software Security Center certificate.

Generating a keystore and truststore with a self-signed CA

Note: If you already have a keystore and truststore provided by your organization, skip this section.

Generate the CA key and self-signed CA certificate:

openssl genpkey -algorithm RSA -out ca.key -pkeyopt rsa_keygen_bits:4096
openssl req -x509 -new -nodes -key ca.key -sha256 -days 365 -out ca.crt -subj "/C=MyCountry/ST=MyState/L=MyCity/O=MyCompany/CN=mycompany.com"

Ensure that the country code is a valid two-letter ISO 3166-1 alpha-2 code. For example, US for the United States, GB for the United Kingdom, etc.

Generate a private key for the Controller:

openssl genpkey -algorithm RSA -out server.key -pkeyopt rsa_keygen_bits:2048

Create a Certificate Signing Request (CSR):

openssl req -new -key server.key -out server.csr -subj "/C=MyCountry/ST=MyState/L=MyCity/O=MyCompany/OU=MyDepartment/CN=sast-25-2-scancentral-sast-controller"

The 'sast-25-2' prefix in the CN comes from the name used to install the Helm chart; the resolved Kubernetes service name follows the template <service-name>-scancentral-sast-controller, so if you install under a different name, adjust the CN accordingly. Ensure that the country code is a valid two-letter ISO 3166-1 alpha-2 code, for example, US for the United States or GB for the United Kingdom. If additional DNS names are required, create a server.ext file. For example:

tee server.ext > /dev/null <<EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[alt_names]
DNS.1 = sast-25-2-scancentral-sast-controller.default.svc.cluster.local
DNS.2 = sast-25-2-scancentral-sast-controller
IP.1 = 127.0.0.1
EOF

The 'sast-25-2' prefix in the DNS names comes from the name used to install the Helm chart. If you install under a different name, the resolved Kubernetes name will differ. The value template is <service-name>-scancentral-sast-controller.default.svc.cluster.local.

Sign the server certificate with the CA:

openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 365 -sha256 -extfile server.ext

Convert the server certificate into a Java keystore, which is used by the Controller:

openssl pkcs12 -export -in server.crt -inkey server.key -CAfile ca.crt -chain -out keystore.p12 -name ctrlKey -password pass:123456
keytool -importkeystore -srckeystore keystore.p12 -srcstoretype PKCS12 -destkeystore keystore.jks -deststoretype JKS -srcstorepass 123456 -deststorepass 123456

Generate a truststore with the CA, which is used by sensors for the connection to the Controller:

keytool -import -trustcacerts -file ca.crt -alias sastCA -keystore truststore.jks -storepass 123456
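The openssl portion of the flow above can be sanity-checked end to end with a condensed script. This is a sketch that repeats the generation steps in the current directory and verifies the certificate chain; the keytool steps are omitted:

```shell
set -e
# CA key and self-signed CA certificate (condensed from the steps above)
openssl genpkey -algorithm RSA -out ca.key -pkeyopt rsa_keygen_bits:4096
openssl req -x509 -new -nodes -key ca.key -sha256 -days 365 -out ca.crt \
  -subj "/C=US/ST=MyState/L=MyCity/O=MyCompany/CN=mycompany.com"

# Server key, CSR, and CA-signed certificate
openssl genpkey -algorithm RSA -out server.key -pkeyopt rsa_keygen_bits:2048
openssl req -new -key server.key -out server.csr \
  -subj "/C=US/ST=MyState/L=MyCity/O=MyCompany/OU=MyDepartment/CN=sast-25-2-scancentral-sast-controller"
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out server.crt -days 365 -sha256

# The server certificate should validate against the CA
openssl verify -CAfile ca.crt server.crt
```

If everything is consistent, the final command reports that server.crt verifies OK.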

Creating secrets for the Helm chart

In the examples below, keystore.jks is the keystore used by the Controller, and truststore.jks is the truststore used to validate both the Controller and Fortify Software Security Center certificates.

Create secrets for the Controller:

kubectl create secret generic sc-sast-server-certificate-keystore --from-file=keystore=keystore.jks
kubectl create secret generic sc-sast-server-certificate-keystore-password --from-literal=password=123456

kubectl create secret generic sc-sast-server-certificate-key-password --from-literal=password=123456
kubectl create secret generic sc-sast-server-certificate-key-alias --from-literal=alias=ctrlKey

kubectl create secret generic sc-sast-server-truststore --from-file=truststore=truststore.jks
kubectl create secret generic sc-sast-server-truststore-password --from-literal=password=123456

Create secrets for sensors:

kubectl create secret generic sc-sast-sensor-truststore --from-file=truststore=truststore.jks
kubectl create secret generic sc-sast-sensor-truststore-password --from-literal=password=123456

Installing ScanCentral SAST components

The following command installs the Controller and sensor (Linux by default) using the recommended defaults. In some cases, you might need to customize these values using the '--set' parameter or by creating a 'values.yaml' file and passing it on the command line with the '-f' flag. For more information about the values you can override, see the Helm Chart Values table.

Tip: To find the available Fortify ScanCentral SAST 25.2 Helm chart versions, go to https://hub.docker.com/r/fortifydocker/helm-scancentral-sast/tags.

  1. Use the following Helm commands to perform the installation:

    Note: The trustedCertificates property is deprecated. OpenText recommends that you provide trusted certificates as follows:

    helm install <release-name> oci://registry-1.docker.io/fortifydocker/helm-scancentral-sast --version <sc-sast-helm-chart-version> \
      --set secrets.secretName=scancentral-chart-secret \
      --set controller.sscUrl="https://<ssc-svc-name>.<namespace>.svc.cluster.local/ssc"
  2. Verify that your ScanCentral SAST pods are running successfully. It might take a few minutes before your pod reaches the 1/1 Running state. You can run the following command multiple times or use the -w flag to watch for changes.

    kubectl get pods

Verifying that the ScanCentral SAST API is available

  1. Open a new terminal shell.

  2. To access your ScanCentral SAST endpoint, set up port forwarding through kubectl.

    For example, to forward the ScanCentral SAST service on localhost port 8443:

    kubectl port-forward svc/<sc-sast-service-name> 8443:443

    You can determine the <sc-sast-service-name> by using the following command to list the Kubernetes services:

    kubectl get services
  3. Verify that you can access the ScanCentral SAST API endpoint with your browser.

    https://localhost:8443/scancentral-ctrl/
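Instead of a browser, you can probe the endpoint from the shell with curl. The -k flag skips verification of the self-signed certificate, and the port-forward from step 2 must be running:

```shell
# Expect an HTTP response from the Controller rather than a connection error.
curl -k https://localhost:8443/scancentral-ctrl/
```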
    

Integrating ScanCentral SAST with Fortify Software Security Center

Configure your ScanCentral SAST Service URL in Fortify Software Security Center by performing the following steps:

  1. Access the Fortify Software Security Center web page by opening a new terminal shell and setting up port forwarding for the Software Security Center service through kubectl. For example, to forward the Software Security Center service to localhost port 8081:

    kubectl port-forward svc/<ssc-service-name> 8081:443

    You can determine the <ssc-service-name> by using the following command to list the Kubernetes services:

    kubectl get services
  2. Access the Software Security Center web application from your browser.

    https://localhost:8081
  3. After you log in to Software Security Center, select the Administration view.

  4. Expand Configuration, and then select ScanCentral SAST.

  5. Specify the ScanCentral SAST Controller URL and the shared secret between SSC and the ScanCentral Controller. To retrieve the secret, run:

    kubectl get secret <scancentral-sast-secret-name> -o jsonpath="{.data.scancentral-ssc-scancentral-ctrl-secret}" | base64 -d

    You can determine the 'scancentral-sast-secret-name' by using the following command to list the Kubernetes secrets:

    kubectl get secrets
  6. Restart the Software Security Center pod (managed by a StatefulSet) to apply the change by running the following command:

    kubectl rollout restart statefulset <ssc-statefulset-name>

    You can determine the 'ssc-statefulset-name' by using the following command to list the Kubernetes statefulsets:

    kubectl get statefulsets
  7. Restarting the Software Security Center StatefulSet interrupts its service port forwarding. Repeat step 1 to configure port forwarding again.

  8. Log back in to Software Security Center. You should see the ScanCentral view in the header, from which you can access the SAST page.

  9. See the Software Security Center and ScanCentral SAST documentation to submit a scan request.
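The secret-retrieval pipeline in step 5 works because kubectl prints secret data base64-encoded, and base64 -d recovers the plain text. The idea can be illustrated locally with a made-up value, no cluster required:

```shell
# Stand-in for the base64 payload kubectl would print for the secret key.
encoded=$(printf 'my-shared-secret' | base64)
# Decoding recovers the original value, exactly as in step 5.
printf '%s' "$encoded" | base64 -d
```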

Special Considerations for testing environments

By default, the Helm chart defines the container resource/requests based on recommended best-practice values intended to prevent performance issues and unexpected Kubernetes evictions of containers and pods. These values are often too large for a small test environment that does not require the same level of resources.

To disable these settings, paste the following values into a file called "resource_override.yaml" and add it to the install command line with the -f flag (for example, -f resource_override.yaml).

WARNING: Using the following settings in production is not supported and can lead to unstable behavior.

controller:
  resources:
    requests:
      cpu: null
      memory: null
    limits:
      cpu: null
      memory: null
workers:
  linux:
    resources:
      requests:
        cpu: null
        memory: null
      limits:
        cpu: null
        memory: null
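With the override file in place, the installation command from the Installing ScanCentral SAST components section gains only the extra -f flag:

```shell
helm install <release-name> oci://registry-1.docker.io/fortifydocker/helm-scancentral-sast \
  --version <sc-sast-helm-chart-version> \
  --set secrets.secretName=scancentral-chart-secret \
  --set controller.sscUrl="https://<ssc-svc-name>.<namespace>.svc.cluster.local/ssc" \
  -f resource_override.yaml
```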

Upgrading

Upgrade notes

The current release of the ScanCentral SAST Helm chart requires some additional steps before you can run the chart. In addition, the workers section was modified to add two new Windows sensors, windows2022 and windows2019, which are disabled by default.

Preparing for the upgrade

Follow the steps in Installation: Creating certificates and required Kubernetes secrets.

Performing the upgrade

  1. Create a new values file if necessary (for example, new-values.yaml).

  2. Identify the previous ScanCentral SAST release deployment. Use the following command to find it.

    helm list
  3. After you identify the previous ScanCentral SAST release, upgrade the Helm chart. Omit -f new-values.yaml if it is not needed.

    helm upgrade <release-name> oci://registry-1.docker.io/fortifydocker/helm-scancentral-sast --version <sc-sast-helm-chart-version> \
     --set-file secrets.fortifyLicense=<fortify.license> \
     --set controller.sscUrl="https://<ssc-svc-name>.<namespace>.svc.cluster.local/ssc" \
     -f new-values.yaml
  4. Verify that your ScanCentral SAST pods are running successfully. It might take a few minutes before your pod reaches the 1/1 Running state. You can run the following command multiple times or use the -w flag to watch for changes.

    kubectl get pods

Verifying that the ScanCentral SAST API is available

Follow the steps in Installation: Verifying that the ScanCentral SAST API is available.

Troubleshooting

If you encounter issues with the helm upgrade command while using the default Persistent Volume Claim (PVC) instead of providing the controller.persistence.existingClaim value, you might see an error like this:

Error: UPGRADE FAILED: cannot patch "data-sast-25-2-scancentral-sast-controller" with kind PersistentVolumeClaim: PersistentVolumeClaim "data-sast-25-2-scancentral-sast-controller" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests and volumeAttributesClassName for bound claims
  core.PersistentVolumeClaimSpec{
    ... // 2 identical fields
    Resources:        {Requests: {s"storage": {i: {...}, s: "10Gi", Format: "BinarySI"}}},
    VolumeName:       "pvc-ccd73628-6d62-4189-9bff-1786c61b0d46",
-   StorageClassName: &"standard",
+   StorageClassName: nil,
    VolumeMode:       &"Filesystem",
    DataSource:       nil,
    ... // 2 identical fields
  }

To fix this issue, you need to specify a value for controller.persistence.storageClass. First, identify the storage class for your PVC by running:

kubectl get pvc

Look for the STORAGECLASS value associated with the PVC created for the Controller. Then, you can either explicitly specify this storage class name in your custom values.yaml file:

controller:
  persistence:
    storageClass: "<class_name>"

Alternatively, you can set it using the helm upgrade command:

helm upgrade <release_name> oci://registry-1.docker.io/fortifydocker/helm-scancentral-sast --version <sc-sast-helm-chart-version> \
  --set controller.persistence.storageClass="<class_name>" --reuse-values

Values

The following values are exposed by the Helm chart. Unless a value is marked Required, override it only as necessary for your specific environment.

Required

Key Type Default Description
controller.sscUrl string "" Specify the URL of Software Security Center
secrets.fortifyLicense Required if secrets.secretName is blank "" fortify.license file contents. (Tip) Use "--set-file=secrets.fortifyLicense=<FORTIFY_LICENSE_PATH>" option when running a Helm install or upgrade.

Other Values

Key Type Default Description
controller.accessLogMaxDays int 7 Specifies the number of days that Tomcat access logs are retained.
controller.additionalEnvironment list [] Defines any additional environment variables to add to the resulting pod.
controller.affinity pod.affinity {} Defines Node Affinity configurations to add to resulting Kubernetes Pod(s).
controller.allowedTlsProtocols string "TLSv1.3+TLSv1.2" Specifies the allowed TLS protocols for the server
controller.archivedLogsNumber int 20 Specifies the maximum number of archived log files to retain
controller.enabled bool true Specifies whether to deploy the Controller component.
controller.honorCiphersOrder bool false Specifies whether to honor the TLS cipher order
controller.image.pullPolicy string "IfNotPresent" Specifies the image pull behavior.
controller.image.repository string "fortifydocker/scancentral-sast-controller" Specifies the Docker repository from where to pull docker image.
controller.image.tag string "25.2.0" Specifies the version of the docker image to pull.
controller.ingress.annotations object {} Specifies ingress Annotations.
controller.ingress.className string "" Specifies the ingress class.
controller.ingress.enabled bool false Specifies whether to enable the ingress.
controller.ingress.hosts list [{"host":"scancentral-sast-controller.local","paths":[{"path":"/","pathType":"Prefix"}]}] Defines ingress Host configurations.
controller.ingress.tls list [{"hosts":["some-host"],"secretName":"some-name"}] Defines ingress TLS configurations. The default shows example configuration values. The actual default is [].
controller.logFileSize string "20MB" Specifies the maximum size of the ScanCentral Controller log file before it is archived and a new one is created
controller.logLevel string "info" Specifies the log level for ScanCentral Controller
controller.nodeSelector pod.nodeSelector kubernetes.io/os: linux Defines Node selection constraint configurations to add to resulting Kubernetes Pod(s).
controller.persistence.accessMode string "ReadWriteOnce" Persistent Volume access mode (DEPRECATED: use persistence.accessModes instead) Used when 'existingClaim' is not defined. Should generally not be modified.
controller.persistence.accessModes list ["ReadWriteOnce"] Persistent volume access modes. Used when 'existingClaim' is not defined. Should generally not be modified.
controller.persistence.annotations object {} Persistent Volume Claim annotations. Used when 'existingClaim' is not defined.
controller.persistence.enabled bool true Specifies whether to persist controller data across pod reboots. 'false' is not supported in production environments.
controller.persistence.existingClaim string "" Provides a pre-configured Kubernetes PersistentVolumeClaim. This is the recommended approach to use in production.
controller.persistence.labels object {} Persistent Volume Claim labels. Used when 'existingClaim' is not defined.
controller.persistence.selector object {} Specifies the selector to match an existing Persistent Volume. Used when 'existingClaim' is not defined.
controller.persistence.size string "10Gi" Specifies the Persistent Volume size. Used when 'existingClaim' is not defined.
controller.persistence.storageClass string "" Specifies the Persistent Volume storage class; used when 'existingClaim' is not defined. If defined, storageClassName: <storageClass>. If set to "-", storageClassName: "", which disables dynamic provisioning. If undefined (the default) or set to null, no storageClassName spec is set and the default provisioner is chosen. ** Warning: Due to the risk of data loss on application uninstall, OpenText strongly recommends against directly configuring the storageClass and related options in a production environment. This mechanism will be removed in a future update.
controller.podAnnotations pod.annotations {} Defines annotations to add to resulting Kubernetes Pod(s).
controller.podLabels pod.labels {} Defines labels to add to resulting Kubernetes Pod(s).
controller.resources object {"limits":{"cpu":1,"memory":"8Gi"},"requests":{"cpu":1,"memory":"8Gi"}} Specifies resource requests (guaranteed resources) and limits for the pod. By default, these are configured to the minimum hardware requirements as described by the ScanCentral SAST documentation. Your workload might require larger values. Consult the documentation and ensure these values are appropriate for your environment.
controller.serverCertificateKeyAliasKey string "alias" Specifies the key in the Kubernetes secret to store the alias of a server certificate key in a keystore
controller.serverCertificateKeyAliasSecret string "sc-sast-server-certificate-key-alias" Specifies the Kubernetes secret to store the alias of a server certificate key in a keystore
controller.serverCertificateKeyPasswordKey string "password" Specifies the key in the Kubernetes secret to store the password for a server certificate key in a keystore
controller.serverCertificateKeyPasswordSecret string "sc-sast-server-certificate-key-password" Specifies the Kubernetes secret to store the password for a server certificate key in a keystore
controller.serverCertificateKeystoreKey string "keystore" Specifies the key in the Kubernetes secret to store the server keystore with a server certificate
controller.serverCertificateKeystorePasswordKey string "password" Specifies the key in the Kubernetes secret to store the password for a keystore with a server certificate
controller.serverCertificateKeystorePasswordSecret string "sc-sast-server-certificate-keystore-password" Specifies the Kubernetes secret to store the password for a keystore with a server certificate
controller.serverCertificateKeystoreSecret string "sc-sast-server-certificate-keystore" Specifies the Kubernetes secret to store the server keystore with a server certificate
controller.service.port int 443 Specifies the port for the K8s service.
controller.service.type string "ClusterIP" Specifies which Kubernetes service type to use for the Controller (ClusterIP, NodePort, or LoadBalancer)
controller.sscRemoteIp string "0.0.0.0/0" Specifies the allowed remote IP address for Software Security Center. Only requests with a matching remote IP address are allowed. The default IP address is resolved from ssc_url. Set this value if the Controller accesses Software Security Center via a reverse proxy server. This value can be a comma-separated list of IP addresses or CIDR network ranges.
controller.thisUrl string "" Specifies the URL of the ScanCentral SAST Controller used to provide artifacts from the Controller.
controller.tlsCiphers string "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384" Specifies the TLS ciphers
controller.tolerations pod.tolerations [] Specifies Toleration configurations to add to the resulting Kubernetes Pod(s).
controller.tomcatCatalinaLogsMaxDays int 14 Specifies the 1catalina.org.apache.juli.AsyncFileHandler.maxDays property in Catalina logging.properties
controller.tomcatHostManagerLogsMaxDays int 14 Specifies the 4host-manager.org.apache.juli.AsyncFileHandler.maxDays property in Catalina logging.properties
controller.tomcatLocalhostLogsMaxDays int 14 Specifies the 2localhost.org.apache.juli.AsyncFileHandler.maxDays property in Catalina logging.properties
controller.tomcatManagerLogsMaxDays int 14 Specifies the 3manager.org.apache.juli.AsyncFileHandler.maxDays property in Catalina logging.properties
controller.truststoreSecret string "sc-sast-server-truststore" Specifies the Kubernetes secret to store truststore for certificate validation
controller.truststoreSecretKey string "truststore" Specifies a key in the Kubernetes secret to store truststore for certificate validation
controller.truststoreSecretPassword string "sc-sast-server-truststore-password" Specifies the Kubernetes secret to store a password for the truststore to validate certificates
controller.truststoreSecretPasswordKey string "password" Specifies a key in the Kubernetes secret to store a password for the truststore to validate certificates
customResources object {"enabled":false,"resources":{}} Defines Kubernetes resources to be installed and configured as part of the Helm chart. If you provide any resources, you must quote them using single quotation marks (').
customResources.enabled bool false Indicates whether to enable custom resource creation.
customResources.resources Kubernetes YAML {} Specifies the custom resources to generate.
dbMigration.image.pullPolicy string "IfNotPresent" Specifies the image pull behavior.
dbMigration.image.repository string "fortifydocker/scancentral-sast-db-migration" Specifies the Docker repository from where to pull docker image.
dbMigration.image.tag string "25.2.0" Specifies the version of the docker image to pull.
dbMigration.resources object {"limits":{"cpu":1,"memory":"8Gi"},"requests":{"cpu":1,"memory":"8Gi"}} Specifies resource requests (guaranteed resources) and limits for the pod. By default, these are configured for the minimum hardware requirements as described in the ScanCentral SAST documentation. Your workload might require larger values. Consult the documentation to ensure these values are appropriate for your environment.
fullnameOverride string .Release.name Overrides the fully qualified app name of the release.
imagePullSecrets list [] Specifies a secret in the same namespace to use for pulling any of the images used by the current release. You must provide this if pulling images directly from DockerHub.
nameOverride string .Chart.name Overrides the name of this chart.
secrets.clientAuthToken string "" Kubernetes secret that stores the authentication token clients use to connect to the Controller.
secrets.secretName string "" Specifies a Kubernetes secret containing ScanCentral SAST sensitive data. If empty, a secret is created automatically using sensitive properties. If not empty, the existing secret referenced by "secretName" is used and sensitive entries in this section are ignored
secrets.sscScanCentralCtrlSecret string "" Secret that stores authentication credentials to Software Security Center.
secrets.workerAuthToken string "" Kubernetes secret that stores the authentication token sensors use to connect to the Controller.
trustedCertificates list [] Specifies a list of certificates in PEM format to add to the Controller and sensor truststores (DEPRECATED: Mount the truststore to the Controller using serverTruststoreSecret instead. Mount the truststore to a sensor using workers.<worker>.truststoreSecret instead). (Tip) Use the "--set-file=trustedCertificates[]=" option when running helm install or upgrade. Example: --set-file=trustedCertificates[0]=cert0.crt --set-file=trustedCertificates[1]=cert1.crt
workers.linux.additionalEnvironment list [] Defines any additional environment variables to add to the resulting pod.
workers.linux.affinity pod.affinity {} Defines Node Affinity configurations to add to resulting Kubernetes Pod(s).
workers.linux.autoUpdate.enabled bool true Specifies whether to update Rulepacks on the sensor prior to starting.
workers.linux.autoUpdate.locale string "en" Specifies the Rulepack locale.
workers.linux.autoUpdate.proxy.host string nil FQDN for the proxy to autoupdate. Not a URL.
workers.linux.autoUpdate.proxy.password string nil Autoupdate proxy password
workers.linux.autoUpdate.proxy.port string nil Autoupdate Proxy server port
workers.linux.autoUpdate.proxy.username string nil Autoupdate proxy username
workers.linux.autoUpdate.server.acceptKey bool false Automatically accept the update server's public key.
workers.linux.autoUpdate.server.acceptSslCertificate bool false Automatically accept the update server's SSL certificate public key.
workers.linux.autoUpdate.server.url string nil Specifies the URL of the update server. Leave empty to set to the default for the Fortify update server
workers.linux.controllerProxyHost string nil Specifies a proxy host used to connect to the Controller.
workers.linux.controllerProxyPassword string nil Specifies the proxy password.
workers.linux.controllerProxyPort string nil Specifies the proxy host port used to connect to the Controller.
workers.linux.controllerProxyUser string nil Specifies the username used to connect to the proxy.
workers.linux.controllerUrl string nil Specifies the Controller URL. If empty, it is configured automatically based on the endpoint of the Controller installed by the chart. If the chart's Controller is disabled, this property is required.
workers.linux.enabled bool true Enables or disables this component
workers.linux.image.pullPolicy string "IfNotPresent" Specifies the image pull behavior.
workers.linux.image.repository string "fortifydocker/scancentral-sast-sensor" Specifies the Docker Repository name from where to pull the Docker image.
workers.linux.image.tag string "25.2.0" Specifies the version of the docker image to pull.
workers.linux.nodeSelector pod.nodeSelector kubernetes.io/os: linux Defines Node selection constraint configurations to add to resulting Kubernetes Pod(s).
workers.linux.os string "linux" Specifies the sensor operating system (linux/windows).
workers.linux.persistence.accessMode string "ReadWriteOnce"
workers.linux.persistence.accessModes[0] string "ReadWriteOnce"
workers.linux.persistence.annotations object {}
workers.linux.persistence.enabled bool false Specifies whether to use an external persistent store for the temporary worker data. Using this option in production is not recommended and will be removed in a future release.
workers.linux.persistence.selector object {}
workers.linux.persistence.size string "10Gi" Specifies the Persistent Volume size. Used when 'existingClaim' is not defined.
workers.linux.persistence.storageClass string ""
workers.linux.podAnnotations pod.annotations {} Defines annotations to add to resulting Kubernetes Pod(s).
workers.linux.podLabels pod.labels {} Defines labels to add to resulting Kubernetes Pod(s).
workers.linux.replicas int 1 Number of replicas
workers.linux.resources object {"limits":{"cpu":8,"memory":"32Gi"},"requests":{"cpu":8,"memory":"32Gi"}} Resource requests (guaranteed resources) and limits for the pod. The default values are based on a generalized baseline and should be adjusted to the correct size based on the sizing calculations in the ScanCentral SAST documentation.
workers.linux.restapiConnectTimeout int 10000
workers.linux.restapiReadTimeout int 30000
workers.linux.scanTimeout string nil Timeout (in minutes) for a scan. When set, scans that exceed the timeout value are canceled
workers.linux.sscProxyHost string nil Proxy host for connecting to Software Security Center.
workers.linux.sscProxyPassword string nil Specifies the proxy password.
workers.linux.sscProxyPort string nil Specifies the proxy host port used to connect to Software Security Center.
workers.linux.sscProxyUser string nil Specifies the username used to connect to the proxy
workers.linux.tolerations pod.tolerations [] Defines Toleration configurations to add to resulting Kubernetes Pod(s).
workers.linux.topologySpreadConstraints list [] Defines topology spread constraints, which can be used to balance load and manage "noisy neighbor" scenarios.
workers.linux.truststoreSecret string "sc-sast-sensor-truststore" Specifies the Kubernetes secret to store the truststore for server certificate validation
workers.linux.truststoreSecretKey string "truststore" Specifies a key in the Kubernetes secret to store the truststore for server certificate validation
workers.linux.truststoreSecretPassword string "sc-sast-sensor-truststore-password" Specifies the Kubernetes secret to store a password so that a truststore can validate server certificates
workers.linux.truststoreSecretPasswordKey string "password" Specifies a key in the Kubernetes secret to store a password so that a truststore can validate server certificates
workers.linux.uuidDerivedFromPodName bool true Specifies whether to assign UUIDs to sensors based on the namespace and pod name. Sensors will have the same UUID on restarts even if persistence is disabled.
workers.linux.workerCleanupAge int 168 Specifies the sensor cleanup age.
workers.linux.workerCleanupInterval int 1 Specifies the sensor cleanup interval.
workers.windows2019.additionalEnvironment list [] Defines any additional environment variables to add to the resulting pod.
workers.windows2019.affinity pod.affinity {} Defines Node Affinity configurations to add to resulting Kubernetes Pod(s).
workers.windows2019.autoUpdate.enabled bool true Specifies whether to update Rulepacks on the sensor prior to starting.
workers.windows2019.autoUpdate.locale string "en" Specifies the Rulepack locale.
workers.windows2019.autoUpdate.proxy.host string nil FQDN for the proxy to autoupdate. Not a URL.
workers.windows2019.autoUpdate.proxy.password string nil Autoupdate proxy password
workers.windows2019.autoUpdate.proxy.port string nil Autoupdate Proxy server port
workers.windows2019.autoUpdate.proxy.username string nil Autoupdate proxy username
workers.windows2019.autoUpdate.server.acceptKey bool false Automatically accept the update server's public key.
workers.windows2019.autoUpdate.server.acceptSslCertificate bool false Automatically accept the update server's SSL certificate public key.
workers.windows2019.autoUpdate.server.url string nil Specifies the URL of the update server. Leave empty to use the default Fortify update server.
workers.windows2019.controllerProxyHost string nil Specifies a proxy host used to connect to the Controller.
workers.windows2019.controllerProxyPassword string nil Specifies the proxy password.
workers.windows2019.controllerProxyPort string nil Specifies the proxy host port used to connect to the Controller.
workers.windows2019.controllerProxyUser string nil Specifies the username used to connect to the proxy.
workers.windows2019.controllerUrl string nil Specifies the Controller URL. If empty, it is configured automatically based on the endpoint of the Controller installed by the chart. If the chart's Controller is disabled, this property is required.
workers.windows2019.enabled bool false Enables or disables this component.
workers.windows2019.image.pullPolicy string "IfNotPresent" Specifies the image pull behavior.
workers.windows2019.image.repository string "fortifydocker/scancentral-sast-sensor-windows" Specifies the Docker Repository name from where to pull the Docker image.
workers.windows2019.image.tag string "25.2.0-windowsservercore-ltsc2019" Specifies the version of the docker image to pull.
workers.windows2019.nodeSelector pod.nodeSelector kubernetes.io/os: windows Defines Node selection constraint configurations to add to resulting Kubernetes Pod(s).
workers.windows2019.os string "windows" Specifies the sensor operating system (linux/windows).
workers.windows2019.persistence.accessMode string "ReadWriteOnce" Specifies the Persistent Volume access mode.
workers.windows2019.persistence.accessModes[0] string "ReadWriteOnce" Specifies the Persistent Volume access modes.
workers.windows2019.persistence.annotations object {} Defines annotations to add to the resulting Persistent Volume Claim.
workers.windows2019.persistence.enabled bool false Specifies whether to use an external persistent store for the temporary worker data. Using this option in production is not recommended, and it will be removed in a future release.
workers.windows2019.persistence.selector object {} Defines a selector to match an existing Persistent Volume.
workers.windows2019.persistence.size string "10Gi" Specifies the Persistent Volume size. Used when 'existingClaim' is not defined.
workers.windows2019.persistence.storageClass string "" Specifies the Storage Class of the backing Persistent Volume.
workers.windows2019.podAnnotations pod.annotations {} Defines annotations to add to resulting Kubernetes Pod(s).
workers.windows2019.podLabels pod.labels {} Defines labels to add to resulting Kubernetes Pod(s).
workers.windows2019.replicas int 1 Specifies the number of replicas.
workers.windows2019.resources object {"limits":{"cpu":8,"memory":"32Gi"},"requests":{"cpu":8,"memory":"32Gi"}} Resource requests (guaranteed resources) and limits for the pod. The default values are based on a generalized baseline and should be adjusted to the correct size based on the sizing calculations in the ScanCentral SAST documentation.
workers.windows2019.restapiConnectTimeout int 10000 Specifies the REST API connection timeout in milliseconds.
workers.windows2019.restapiReadTimeout int 30000 Specifies the REST API read timeout in milliseconds.
workers.windows2019.scanTimeout string nil Specifies the scan timeout in minutes. When set, scans that exceed the timeout value are cancelled.
workers.windows2019.sscProxyHost string nil Specifies the proxy host used to connect to Software Security Center.
workers.windows2019.sscProxyPassword string nil Specifies the proxy password.
workers.windows2019.sscProxyPort string nil Specifies the proxy host port used to connect to Software Security Center.
workers.windows2019.sscProxyUser string nil Specifies the username used to connect to the proxy.
workers.windows2019.tolerations pod.tolerations [{"effect":"NoSchedule","key":"os","operator":"Equal","value":"windows"}] Defines Toleration configurations to add to resulting Kubernetes Pod(s).
workers.windows2019.topologySpreadConstraints list [] Implementing spread constraints can be used to balance load and manage "noisy neighbor" scenarios.
workers.windows2019.truststoreSecret string "sc-sast-sensor-truststore" Specifies the Kubernetes secret that stores the truststore used for server certificate validation.
workers.windows2019.truststoreSecretKey string "truststore" Specifies the key in the Kubernetes secret that stores the truststore used for server certificate validation.
workers.windows2019.truststoreSecretPassword string "sc-sast-sensor-truststore-password" Specifies the Kubernetes secret that stores the password for the truststore used for server certificate validation.
workers.windows2019.truststoreSecretPasswordKey string "password" Specifies the key in the Kubernetes secret that stores the password for the truststore used for server certificate validation.
workers.windows2019.uuidDerivedFromPodName bool true Specifies whether to assign UUIDs to sensors based on the namespace and pod name. Sensors will have the same UUID on restarts even if persistence is disabled.
workers.windows2019.workerCleanupAge int 168 Specifies the sensor cleanup age in hours.
workers.windows2019.workerCleanupInterval int 1 Specifies the sensor cleanup interval in hours.
workers.windows2022.additionalEnvironment list [] Defines any additional environment variables to add to the resulting pod.
workers.windows2022.affinity pod.affinity {} Defines Node Affinity configurations to add to resulting Kubernetes Pod(s).
workers.windows2022.autoUpdate.enabled bool true Specifies whether to update Rulepacks on the sensor prior to starting.
workers.windows2022.autoUpdate.locale string "en" Specifies the Rulepack locale.
workers.windows2022.autoUpdate.proxy.host string nil Specifies the FQDN (not a URL) of the proxy server used for autoupdate.
workers.windows2022.autoUpdate.proxy.password string nil Specifies the autoupdate proxy password.
workers.windows2022.autoUpdate.proxy.port string nil Specifies the autoupdate proxy server port.
workers.windows2022.autoUpdate.proxy.username string nil Specifies the autoupdate proxy username.
workers.windows2022.autoUpdate.server.acceptKey bool false Automatically accept the update server's public key.
workers.windows2022.autoUpdate.server.acceptSslCertificate bool false Automatically accept the update server's SSL certificate public key.
workers.windows2022.autoUpdate.server.url string nil Specifies the URL of the update server. Leave empty to use the default Fortify update server.
workers.windows2022.controllerProxyHost string nil Specifies a proxy host used to connect to the Controller.
workers.windows2022.controllerProxyPassword string nil Specifies the proxy password.
workers.windows2022.controllerProxyPort string nil Specifies the proxy host port used to connect to the Controller.
workers.windows2022.controllerProxyUser string nil Specifies the username used to connect to the proxy.
workers.windows2022.controllerUrl string nil Specifies the Controller URL. If empty, it is configured automatically based on the endpoint of the Controller installed by the chart. If the chart's Controller is disabled, this property is required.
workers.windows2022.enabled bool false Enables or disables this component.
workers.windows2022.image.pullPolicy string "IfNotPresent" Specifies the image pull behavior.
workers.windows2022.image.repository string "fortifydocker/scancentral-sast-sensor-windows" Specifies the Docker Repository name from where to pull the Docker image.
workers.windows2022.image.tag string "25.2.0-windowsservercore-ltsc2022" Specifies the version of the docker image to pull.
workers.windows2022.nodeSelector pod.nodeSelector kubernetes.io/os: windows Defines Node selection constraint configurations to add to resulting Kubernetes Pod(s).
workers.windows2022.os string "windows" Specifies the sensor operating system (linux/windows).
workers.windows2022.persistence.accessMode string "ReadWriteOnce" Specifies the Persistent Volume access mode.
workers.windows2022.persistence.accessModes[0] string "ReadWriteOnce" Specifies the Persistent Volume access modes.
workers.windows2022.persistence.annotations object {} Defines annotations to add to the resulting Persistent Volume Claim.
workers.windows2022.persistence.enabled bool false Specifies whether to use an external persistent store for the temporary worker data. Using this option in production is not recommended, and it will be removed in a future release.
workers.windows2022.persistence.selector object {} Defines a selector to match an existing Persistent Volume.
workers.windows2022.persistence.size string "10Gi" Specifies the Persistent Volume size. Used when 'existingClaim' is not defined.
workers.windows2022.persistence.storageClass string "" Specifies the Storage Class of the backing Persistent Volume.
workers.windows2022.podAnnotations pod.annotations {} Defines annotations to add to resulting Kubernetes Pod(s).
workers.windows2022.podLabels pod.labels {} Defines labels to add to resulting Kubernetes Pod(s).
workers.windows2022.replicas int 1 Specifies the number of replicas.
workers.windows2022.resources object {"limits":{"cpu":8,"memory":"32Gi"},"requests":{"cpu":8,"memory":"32Gi"}} Resource requests (guaranteed resources) and limits for the pod. The default values are based on a generalized baseline and should be adjusted to the correct size based on the sizing calculations in the ScanCentral SAST documentation.
workers.windows2022.restapiConnectTimeout int 10000 Specifies the REST API connection timeout in milliseconds.
workers.windows2022.restapiReadTimeout int 30000 Specifies the REST API read timeout in milliseconds.
workers.windows2022.scanTimeout string nil Specifies the scan timeout in minutes. When set, scans that exceed the timeout value are cancelled.
workers.windows2022.sscProxyHost string nil Specifies the proxy host used to connect to Software Security Center.
workers.windows2022.sscProxyPassword string nil Specifies the proxy password.
workers.windows2022.sscProxyPort string nil Specifies the proxy host port used to connect to Software Security Center.
workers.windows2022.sscProxyUser string nil Specifies the username used to connect to the proxy.
workers.windows2022.tolerations pod.tolerations [{"effect":"NoSchedule","key":"os","operator":"Equal","value":"windows"}] Defines Toleration configurations to add to resulting Kubernetes Pod(s).
workers.windows2022.topologySpreadConstraints list [] Implementing spread constraints can be used to balance load and manage "noisy neighbor" scenarios.
workers.windows2022.truststoreSecret string "sc-sast-sensor-truststore" Specifies the Kubernetes secret that stores the truststore used for server certificate validation.
workers.windows2022.truststoreSecretKey string "truststore" Specifies the key in the Kubernetes secret that stores the truststore used for server certificate validation.
workers.windows2022.truststoreSecretPassword string "sc-sast-sensor-truststore-password" Specifies the Kubernetes secret that stores the password for the truststore used for server certificate validation.
workers.windows2022.truststoreSecretPasswordKey string "password" Specifies the key in the Kubernetes secret that stores the password for the truststore used for server certificate validation.
workers.windows2022.uuidDerivedFromPodName bool true Specifies whether to assign UUIDs to sensors based on the namespace and pod name. Sensors will have the same UUID on restarts even if persistence is disabled.
workers.windows2022.workerCleanupAge int 168 Specifies the sensor cleanup age in hours.
workers.windows2022.workerCleanupInterval int 1 Specifies the sensor cleanup interval in hours.
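As an illustration of how the sensor parameters above fit together, the following values override sketch enables the Windows Server 2022 sensor behind an outbound proxy. The hostname, port, and file name are placeholders for your environment, and the resource figures simply restate the chart defaults; adjust everything to the sizing calculations in the ScanCentral SAST documentation.

```yaml
# values-windows-sensor.yaml -- illustrative overrides only; hostnames,
# ports, and sizing below are placeholders, not recommendations
workers:
  windows2022:
    enabled: true                            # chart default is false
    replicas: 2
    controllerProxyHost: proxy.example.com   # placeholder proxy FQDN
    controllerProxyPort: "3128"              # placeholder proxy port
    scanTimeout: "240"                       # cancel scans running longer than 4 hours
    resources:                               # chart defaults; resize per the sizing docs
      requests:
        cpu: 8
        memory: 32Gi
      limits:
        cpu: 8
        memory: 32Gi
```

You could then apply this with, for example, `helm upgrade --install scancentral-sast oci://registry-1.docker.io/fortifydocker/helm-scancentral-sast --version 25.2.0 -f values-windows-sensor.yaml`; the release name and chart version shown here are assumptions, so match them to your installation.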