Why does search performance on an existing collection degrade when a large amount of data is inserted into a new collection? #38174
Unanswered
pipi-olo asked this question in Q&A and General discussion
Replies: 1 comment · 5 replies
-
Theoretically, bulk insert doesn't affect search performance in cluster mode. Bulk insert doesn't generate growing segments: the data files are read, sealed segments are generated from them, and a new segment is loaded only after it is fully indexed. Is your Milvus deployed as a cluster rather than standalone?
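For reference, this is roughly how a bulk insert is issued and monitored with pymilvus. The collection name and file name are placeholders, not from this discussion; it assumes pymilvus 2.x (which provides `utility.do_bulk_insert` / `utility.get_bulk_insert_state`) and a reachable Milvus cluster. The polling helper takes the state-fetching callable as a parameter so the loop can be exercised without a server:

```python
"""Sketch: issue a bulk insert and wait for the task to reach a terminal state.

Placeholder names throughout; assumes pymilvus 2.x and a running Milvus cluster.
"""
import time


def wait_until_done(get_state, task_id, timeout_s=600.0, poll_s=5.0):
    """Poll `get_state(task_id)` until it returns a terminal state name.

    `get_state` is injected; in real use, pass a wrapper around
    `utility.get_bulk_insert_state`.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        state = get_state(task_id)
        # "Completed" / "Failed" are terminal; other states mean still running.
        if state == "Completed" or state.startswith("Failed"):
            return state
        time.sleep(poll_s)
    raise TimeoutError(f"bulk insert task {task_id} did not finish in time")


if __name__ == "__main__":
    from pymilvus import connections, utility

    connections.connect(host="localhost", port="19530")
    task_id = utility.do_bulk_insert(
        collection_name="collection_b",  # placeholder collection name
        files=["data.json"],             # file already uploaded to the Milvus bucket
    )
    # As noted above: the sealed segment serves queries only after it is indexed.
    final = wait_until_done(
        lambda tid: utility.get_bulk_insert_state(tid).state_name, task_id
    )
    print("bulk insert finished with state:", final)
```

Because the import happens via the storage layer rather than the message queue, the write path is decoupled from the query nodes' streaming pipeline, which is why it should not create growing segments.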
-
Hi.
I installed Milvus on Kubernetes using the Helm chart (v2.5.15).
While inserting data into Collection B using bulk insert, the QPS for Collection A drops.
What could be the reason for this?
(I used VectorDBBench to measure QPS.)
I thought the cause was that the query nodes' growing segments had increased because they subscribe to Kafka, so I created a new resource group and deployed Collection A in it.
However, even with this setup, when I bulk-insert large data into the new Collection B, the performance of the existing Collection A still drops.
Also, I read that bulk insert does not publish data to Kafka.
What is the reason for this, and how can it be resolved?
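For anyone comparing setups: a minimal, hypothetical sketch of the resource-group isolation described above, using the pymilvus resource-group API (assumes pymilvus >= 2.2 and a Milvus cluster; the group name `rg_a` and node count are placeholders). The pymilvus `utility` module is passed in as a parameter so the call sequence can be checked with a stub:

```python
"""Sketch: isolate a collection's replica on dedicated query nodes.

Assumes pymilvus >= 2.2 (resource-group API) and a Milvus cluster deployment;
group name and node count below are placeholders.
"""

DEFAULT_RG = "__default_resource_group"  # Milvus's built-in default group


def isolate_collection(util, collection_name, rg_name="rg_a", num_nodes=1):
    """Create a dedicated resource group and move the collection's replica into it.

    `util` is the pymilvus `utility` module (injected so the sequence can be
    exercised with a stub).
    """
    if rg_name not in util.list_resource_groups():
        util.create_resource_group(rg_name)
    # Move query nodes out of the default group into the dedicated one.
    util.transfer_node(DEFAULT_RG, rg_name, num_nodes)
    # Move the collection's loaded replica onto those dedicated nodes.
    util.transfer_replica(DEFAULT_RG, rg_name, collection_name, 1)


if __name__ == "__main__":
    from pymilvus import connections, utility

    connections.connect(host="localhost", port="19530")
    isolate_collection(utility, "collection_a")  # placeholder collection name
```

If the replica was only created in the new group but its nodes still share hosts (CPU, disk, object-storage bandwidth) with the nodes indexing Collection B, a QPS drop can still show up even with correct resource-group isolation.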