Redis Abort Connection issue in Kubernetes

============Redis Client Error=============
AbortError: Ready check failed: Redis connection lost and command aborted. It might have been processed.
    at RedisClient.flush_and_error (/server/node_modules/redis/index.js:298:23)
    at RedisClient.connection_gone (/server/node_modules/redis/index.js:603:14)
    at RedisClient.on_error (/server/node_modules/redis/index.js:346:10)
    at Socket.<anonymous> (/server/node_modules/redis/index.js:223:14)
    at Socket.emit (events.js:400:28)
    at emitErrorNT (internal/streams/destroy.js:106:8)
    at emitErrorCloseNT (internal/streams/destroy.js:74:3)
    at processTicksAndRejections (internal/process/task_queues.js:82:21) {
  code: 'UNCERTAIN_STATE',
  command: 'INFO',
  origin: Error: read ECONNRESET
      at TCP.onStreamRead (internal/stream_base_commons.js:209:20) {
    errno: -104,
    code: 'ECONNRESET',
    syscall: 'read'
  }
}

This is the error I am seeing in my Kubernetes cluster on GCP. Redis runs as a single pod; the server itself is up, and redis-cli ping returns PONG from inside the Redis pod. The error only shows up intermittently, even when there is not much user traffic. What could be the probable reason, and is there a fix?
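For context, the stack trace above is from node_redis v3, and the app reaches Redis through the Service below. The snippet here is only an illustrative sketch (the hostname, options, and retry_strategy are assumptions, not our actual client code), showing roughly how such a client is created and where reconnect behaviour can be tuned:

```js
const redis = require('redis');

// Illustrative only: hostname and options are assumptions, not the real client config.
const client = redis.createClient({
  host: 'stubs-branch-planner-api-redis.branch-planner.svc.cluster.local', // Kubernetes Service DNS name
  port: 6379,
  // Called whenever the connection is lost: return a delay in ms to reconnect,
  // or an Error to stop retrying.
  retry_strategy: (options) => {
    if (options.error && options.error.code === 'ECONNREFUSED') {
      // Stop retrying if the server actively refuses connections.
      return new Error('Redis server refused the connection');
    }
    // Otherwise reconnect with a backoff capped at 3 seconds.
    return Math.min(options.attempt * 100, 3000);
  },
});

client.on('error', (err) => {
  console.error('============Redis Client Error=============', err);
});
client.on('ready', () => {
  console.log('Redis client ready');
});
```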

**redis svc yaml file**

---
kind: Service
apiVersion: v1
metadata:
  name: stubs-branch-planner-api-redis
  namespace: branch-planner
  labels:
    app: branch-planner-api
    environment: stubs
spec:
  selector:
    app: branch-planner-api
    environment: stubs
  ports:
  - protocol: TCP
    port: 6379

**redis deployment yaml file**

kind: Deployment
apiVersion: apps/v1
metadata:
  name: stubs-branch-planner-api-redis
  namespace: branch-planner
spec:
  replicas: 1
  minReadySeconds: 10
  selector:
    matchLabels:
      app: branch-planner-api
      environment: stubs
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: branch-planner-api
        environment: stubs
    spec:
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext:
        fsGroup: 101
        runAsNonRoot: true
      containers:
      - name: redis
        image: eu.gcr.io/jl-container-images/platform/redis:5-alpine
        command:
        - redis-server
        - "/redis-master/redis.conf"
        ports:
        - containerPort: 6379
        imagePullPolicy: Always
        resources:
          limits:
            memory: 1024Mi
            cpu: "1"
          requests:
            cpu: 256m
            memory: 256Mi
        securityContext:
          readOnlyRootFilesystem: false
          runAsUser: 101
        volumeMounts:
        - mountPath: /redis-master-data
          name: data
        - mountPath: /redis-master
          name: config
      volumes:
      - name: data
        emptyDir: {}
      - name: config
        configMap:
          name: planner-redis-config
          items:
          - key: redis-config
            path: redis.conf
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: branch-planner-api
                  environment: stubs
              topologyKey: failure-domain.beta.kubernetes.io/zone

Hi @satya1008,

I haven’t seen this before. You don’t happen to know what the overall state of the kube network was at the time, do you?

Nope, but it occurs intermittently. I tried increasing pod resources (both replicas and pod limits), but that didn’t help.