Dear Skyvia team,
We are evaluating Skyvia for data replication between Salesforce and BigQuery. The documentation states that after a schema change, the table for the affected object must be dropped and re-created. This is problematic for our use case: we are building a warehouse that must retain both active and historical Salesforce data, and dropping tables on every schema change would destroy that history.
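For context, one workaround we have considered is archiving each affected table before the replication re-creates it. Below is a minimal sketch of that idea, assuming the google-cloud-bigquery client library; the project, dataset, and table names are hypothetical placeholders. We would prefer a built-in option over maintaining this kind of script ourselves.

```python
# Sketch: archive a replicated table before a schema change forces a
# drop/re-create. Assumes google-cloud-bigquery is installed and
# application default credentials are configured.
from datetime import datetime, timezone

from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical identifiers; replace with your own project/dataset/table.
source = "my-project.salesforce_replica.Account"
stamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
destination = f"my-project.salesforce_archive.Account_{stamp}"

# copy_table() starts a BigQuery copy job; result() blocks until it finishes.
job = client.copy_table(source, destination)
job.result()
print(f"Archived {source} to {destination}")
```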
How can we address this issue? Is there a way to handle schema changes without losing the already replicated history?
https://docs.skyvia.com/data-integration/replication/configuring-replication-package.html#metadata-changes-and-editing-replication-package