Dump.rdb file size in Redis

Hi Folks,

My team is facing the issue below with a 3-node Redis cluster.
Currently the dump.rdb file is around 7.5 GB.

The total number of keys is showing as 75 million plus, but when we check the keys in our major indices, the count is not even 2 million.

Another issue: when we delete keys from the indices (we tried both FT.DEL and DEL) and then run BGSAVE, the file size does not come down.

I am not sure if there is any other way to shrink the dump file. Can someone assist and guide me on this issue?
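
For reference, a quick way to cross-check the raw keyspace count against what a single index reports (casealert is one of the index names used later in this thread; FT.INFO exposes a num_docs field):

$ redis-cli dbsize                # total keys in the current database
$ redis-cli ft.info casealert     # check the num_docs field in the output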

[stapp@hc9t07428 ~]$ ll -h /opt/mount1/redis/dump.rdb
-rw-r--r-- 1 stapp csst 7.5G Nov 12 07:05 /opt/mount1/redis/dump.rdb
[stapp@hc9t07428 ~]$ ll /opt/mount1/redis/dump.rdb
-rw-r--r-- 1 stapp csst 8021632602 Nov 12 07:05 /opt/mount1/redis/dump.rdb
[stapp@hc9t07428 ~]$ date
Fri Nov 12 07:47:21 UTC 2021
[stapp@hc9t07428 ~]$ redis-cli info memory
# Memory
used_memory:25089152288
used_memory_human:23.37G
used_memory_rss:25593434112
used_memory_rss_human:23.84G
used_memory_peak:25092635408
used_memory_peak_human:23.37G
used_memory_peak_perc:99.99%
used_memory_overhead:4145798508
used_memory_startup:561328
used_memory_dataset:20943353780
used_memory_dataset_perc:83.48%
allocator_allocated:25089238136
allocator_active:25152786432
allocator_resident:25653260288
total_system_memory:33547927552
total_system_memory_human:31.24G
used_memory_lua:37888
used_memory_lua_human:37.00K
used_memory_scripts:0
used_memory_scripts_human:0B
number_of_cached_scripts:0
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
allocator_frag_ratio:1.00
allocator_frag_bytes:63548296
allocator_rss_ratio:1.02
allocator_rss_bytes:500473856
rss_overhead_ratio:1.00
rss_overhead_bytes:-59826176
mem_fragmentation_ratio:1.02
mem_fragmentation_bytes:504302848
mem_not_counted_for_evict:0
mem_replication_backlog:1048576
mem_clients_slaves:33972
mem_clients_normal:271776
mem_aof_buffer:0
mem_allocator:jemalloc-5.1.0
active_defrag_running:0
lazyfree_pending_objects:0
[stapp@hc9t07428 ~]$ redis-cli info keyspace
# Keyspace
db0:keys=76753525,expires=0,avg_ttl=0
[stapp@hc9t07428 ~]$

You mentioned indices. Are you using RediSearch by any chance?

Hello Kyle,

We are not using RediSearch. When I said indices, I was actually referring to the below.

For example:

$ redis-cli ft.search casealert "@processed:Y" limit 0 0
1) (integer) 10001
$ redis-cli ft.search casetable "@processed:Y" limit 0 0
1) (integer) 25228
$ redis-cli ft.search casetable "*" limit 0 0
1) (integer) 50619
$ redis-cli ft.search caserequest "*" limit 0 0
1) (integer) 65482

In the above commands, casealert, casetable, and caserequest are the indexes I am referring to. We delete keys from these indexes daily and also run BGSAVE manually, but no matter what we do, the dump file keeps growing every day, and it may be causing API calls to the DB to fail.
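
One thing worth checking, assuming RediSearch 1.x (where documents are stored as regular hashes alongside the index): FT.DEL without the DD flag removes a document from the index but leaves its hash in the keyspace, so it still ends up in the RDB file. A sketch, with doc:12345 as a hypothetical document ID:

$ redis-cli ft.del casealert doc:12345      # index entry removed; the document hash remains
$ redis-cli ft.del casealert doc:12345 DD   # DD also deletes the underlying hash

That behavior would be consistent with small index counts alongside a very large keyspace.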

$ ll dump.rdb
-rw-r--r-- 1 stapp csst 8101144253 Nov 13 04:45 dump.rdb
$ ll -h dump.rdb
-rw-r--r-- 1 stapp csst 7.6G Nov 13 04:45 dump.rdb
$ date
Sat Nov 13 05:13:45 UTC 2021
$

Since yesterday the dump size has gone up by 0.1 GB, and the keys in the DB have gone up by around 500,000. I do not know what is creating this many keys and growing the dump.

[ redis]$ redis-cli info keyspace
# Keyspace
db0:keys=77207192,expires=0,avg_ttl=0
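
One way to see which keys dominate is to scan the keyspace and group keys by prefix. A sketch, assuming a colon-delimited key naming scheme (adjust the awk delimiter to your convention):

$ redis-cli --scan | awk -F: '{print $1}' | sort | uniq -c | sort -rn | head
$ redis-cli --bigkeys     # summarizes the biggest key of each type while scanning

SCAN is cursor-based and non-blocking, but walking ~77 million keys will still take a while.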

Hello Kyle,

Pardon me for my earlier response; we are in fact using RediSearch.

Hello Folks,

Any idea how I can proceed in this situation? Any suggestions would be highly appreciated :slight_smile:

Good day folks,

Does anyone know how to get paid support for Redis-related problems? Is there a dedicated group that supports Redis 24/7?

Thanks,
Akash

Hi Akash,

Redis, the company, provides 24/7 support, but only if you’re a customer: https://redis.com/

Regarding your dump question, you can open an issue here: https://github.com/RediSearch/RediSearch/issues

Kyle