I am trying to load a larger pool of data into one CPT. However, the new CSV may contain duplicate records of listings that already exist in GD. What is the best approach to import the new CSV without creating thousands of duplicate listings?
I considered using new category names in the new CSV, importing them, then deleting the old categories and renaming the new categories back to the old names. However, removing the old categories only turns the existing listings into orphans, which are even harder to remove.
I also thought of working directly in the database and making the address field unique, but that would have other side effects.
Please suggest an efficient way to achieve this. I have projects on both V1 and V2, and both face the same problem.
Also, would you consider building in a feature to eliminate duplicates during import by comparing multiple fields (like the Remove Duplicates feature in Excel, where I can specify several fields to compare; if all of them are identical, the record counts as a duplicate and is skipped on import)? Even better would be letting the user pre-define which version to keep when a duplicate is found: the new record or the existing one. A feature like this would make continuous data updates much easier.
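In the meantime, I am approximating this with a pre-import dedupe pass over the CSV. A minimal sketch in Python (the field names and the keep-first/keep-last policy are just example assumptions, not anything GD-specific):

```python
def dedupe_rows(rows, key_fields, keep="first"):
    """Drop rows whose key_fields match another row.

    keep="first" keeps the earliest occurrence (the existing record),
    keep="last" keeps the latest one (the newer record).
    """
    seen = {}
    for row in rows:
        # Normalise the compared fields so trivial differences don't hide dupes.
        key = tuple(row[f].strip().lower() for f in key_fields)
        if key not in seen or keep == "last":
            seen[key] = row  # first occurrence wins, or overwrite so last wins
    return list(seen.values())

# Example: treat rows as duplicates when title AND address both match.
rows = [
    {"title": "Cafe One", "address": "1 Main St", "phone": "111"},
    {"title": "Cafe One", "address": "1 Main St", "phone": "222"},
    {"title": "Cafe Two", "address": "9 Oak Ave", "phone": "333"},
]
unique = dedupe_rows(rows, key_fields=["title", "address"], keep="last")
# 2 rows remain; the "Cafe One" entry kept is the newer one (phone 222)
```

With a real file, the rows would come from `csv.DictReader` and the result would be written back out with `csv.DictWriter` before running the import.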
thanks
Sam