Affected components

Any XCRO service running Node.js inside a container that consumes a large amount of memory.

Identifying the problem

Under high memory consumption, the Node.js process exhausts its heap, the container crashes out of memory, and Kubernetes spins up a replacement container. The crashed container's logs look similar to the following:

<--- Last few GCs --->
tart of marking 2299 ms) (average mu = 0.255, current mu = 0.196) final
[1:0x55cc9b6484c0]  1492662 ms: Mark-sweep 1398.7 (1425.6) -> 1398.3 (1425.6) MB, 2112.8 / 0.0 ms  (+ 13.6 ms in 12 steps since start of marking, biggest step 8.9 ms, walltime since start of marking 2347 ms) (average mu = 0.176, current mu = 0.096) alloca
[1:0x55cc9b6484c0]  1495049 ms: Mark-sweep 1398.9 (1425.6) -> 1398.6 (1426.1) MB, 2213.9 / 0.0 ms  (average mu = 0.124, current mu = 0.072) allocation failure scavenge might not succeed
<--- JS stacktrace --->
==== JS stack trace =========================================
   0: ExitFrame [pc: 0x84eeb55be1d]
Security context: 0x0b2ccf59e6e1 <JSObject>
   1: replace [0xb2ccf5905e1](this=0x1bbd955029f1 <String[0]: >,0x080b91a53959 <JSRegExp <String[1]: >>>,0x2b245b47d921 <String[4]: &gt;>)
   2: execute [0x15cae066ce19] [/app/node_modules/json2xls/node_modules/excel-export/index.js:~29] [pc=0x84eeb7d9f0b](this=0x326e9f3a1559 <Object map = 0x2d160dbe5bf9>,config=0x2217c3106eb1 <Object map = 0x2bbcb106ae79>)...
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
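
Before raising the limit, it can help to confirm that the heap is actually growing over time rather than spiking once. A minimal sketch using Node's standard process.memoryUsage() (the helper name and interval are illustrative, not part of the service code):

```javascript
// Hypothetical helper: log heap usage so growth is visible in the
// container logs before the process hits its limit and crashes.
function logHeapUsage() {
  const { heapUsed, heapTotal } = process.memoryUsage();
  const mb = (n) => (n / 1024 / 1024).toFixed(1);
  console.log(`heap: ${mb(heapUsed)} MB used / ${mb(heapTotal)} MB total`);
}

logHeapUsage();
// For continuous monitoring, uncomment:
// setInterval(logHeapUsage, 30000).unref();
```

A "used" figure that climbs steadily across requests usually indicates a leak; a one-off spike points at a single expensive operation instead.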


To resolve the problem, start the Node.js process inside the container with the --max-old-space-size=4096 option. To do this, edit the deployment in Kubernetes using kubectl edit deployment <name> -n <namespace> and add the following environment variable to the container spec.

    env:
    - name: NODE_OPTIONS
      value: "--max-old-space-size=4096"

Set --max-old-space-size (in megabytes) to the limit you want the Node.js heap to be allowed to grow to. On the V8 engine it defaults to roughly 1.4 GB.

Once done, scale the deployment down and back up:

kubectl scale deployment <name> -n <namespace> --replicas=0

kubectl scale deployment <name> -n <namespace> --replicas=1


The new pod will be up and running with the new configuration.

Once the values have been set, you can verify that the container has actually picked up the new settings with the following steps:
1. Get into the pod's shell using the following command
kubectl exec -it <pod-id> -n <namespace> -- sh

2. Run node

3. Execute the following commands in the node shell:

const v8 = require('v8');
const totalHeapSize = v8.getHeapStatistics().total_available_size;
const totalHeapSizeInMB = (totalHeapSize / 1024 / 1024).toFixed(2);
console.log('V8 Total Available Heap Size:', totalHeapSizeInMB, 'MB');
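
As an extra confirmation that the environment variable itself reached the process (assuming it was set via the deployment as above), you can also print it from the node shell:

```javascript
// NODE_OPTIONS should contain the flag if the deployment env was applied.
console.log('NODE_OPTIONS =', process.env.NODE_OPTIONS || '(not set)');
```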