Drop Tables on Schema changes?

  • Last Post 2 weeks ago
Jesse Hoosemans posted this 2 weeks ago

Dear,

We are looking at using Skyvia for data replication between Salesforce and BigQuery. The documentation says that after a schema change, the tables of the affected object need to be dropped and re-created. For our use case, this is not a good option: we are building a warehouse that should keep both active and historical Salesforce data, and dropping tables every time a schema change happens would lose that history.

How can we address this issue? Is there another way? 

https://docs.skyvia.com/data-integration/replication/configuring-replication-package.html#metadata-changes-and-editing-replication-package

Serhii Muzyka posted this 2 weeks ago

Hi Jesse,

Thank you for contacting Skyvia Support.

Replication creates a copy of cloud application data in a relational database and keeps it up to date. As a result, your Target database will hold a copy of the Source with identical schema and data. At the moment, there is no option to keep previously created tables. The available configuration options for a replication package are described in the documentation: https://docs.skyvia.com/data-integration/replication/configuring-replication-package.html#metadata-changes-and-editing-replication-package.

To avoid rewriting or dropping the already replicated tables, you can select a different database as the Target and replicate the data with the new schema there, leaving the existing tables untouched.
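Another way to keep history on your own side is to snapshot the replicated tables before a known schema change is applied, so the rows survive even if Skyvia drops and re-creates the table. A minimal sketch, assuming BigQuery as the Target; the project, dataset, and table names (`my-project`, `skyvia_replica`, `skyvia_history`, `Account`) are hypothetical placeholders, not part of Skyvia's product:

```python
# Hedged sketch: copy a replicated table into a date-stamped history table
# in a separate BigQuery dataset before the replica table is dropped.
from datetime import date

def snapshot_sql(project: str, dataset: str, table: str, history_dataset: str) -> str:
    """Build a CREATE TABLE AS SELECT statement that copies the current
    replica table into a date-stamped table in the history dataset."""
    stamp = date.today().strftime("%Y%m%d")
    return (
        f"CREATE TABLE `{project}.{history_dataset}.{table}_{stamp}` AS "
        f"SELECT * FROM `{project}.{dataset}.{table}`"
    )

# Example: snapshot the replicated Salesforce Account table.
sql = snapshot_sql("my-project", "skyvia_replica", "Account", "skyvia_history")

# The statement could then be executed with the BigQuery client library, e.g.:
#   from google.cloud import bigquery
#   bigquery.Client().query(sql).result()
print(sql)
```

Scheduling such a snapshot (for instance from Cloud Scheduler) before planned Salesforce schema changes would preserve the historical rows outside the tables Skyvia manages.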

Should you have other questions, don't hesitate to reach out.

Best regards,

Serhii

Technical Support Engineer
