Description
I’m encountering an issue when replicating a large dataset (approximately 500 MB) on an iPad. During replication, the device’s memory eventually becomes saturated, and the app crashes and restarts. After restarting, replication does not resume properly; instead, I receive the following error:

Steps to Reproduce:
1. Replicate a collection of roughly 500 MB on an iPad (about 1 GB with all collections combined).
2. Allow replication to run until the iPad memory is saturated.
3. Once the memory is exhausted, the app crashes and restarts.
4. On restart, replication does not resume, and instead, the worker displays the error mentioned above.
Hypothesis:
My suspicion is that due to the crash, the app did not manage to complete writing all the data into the OPFS storage. As a result, only part of the data gets stored, leading to a corrupted JSON file (with, for example, a missing closing bracket). When the replication resumes and attempts to read from this file, it runs into a parsing error because of the corrupted data.
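To illustrate the hypothesis, here is a minimal, self-contained sketch (the helper name is my own, not part of RxDB) showing that a JSON document truncated mid-write, e.g. one missing its closing brackets, fails to parse exactly as described:

```typescript
// Returns true only if the text is complete, valid JSON.
function isParsableJson(text: string): boolean {
  try {
    JSON.parse(text);
    return true;
  } catch {
    return false; // e.g. "Unexpected end of JSON input" for a truncated write
  }
}

const complete = '{"docs":[{"id":1},{"id":2}]}';
// Simulate a crash mid-write: the closing brackets never made it to disk.
const truncated = complete.slice(0, complete.length - 2);

console.log(isParsableJson(complete));  // true
console.log(isParsableJson(truncated)); // false
```

If the storage layer reads such a partially written file on restart, a parse error like the one above would surface before replication can resume.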
Questions / Requests for Assistance:
1. Is it possible that an unexpected crash during replication could leave the OPFS storage in a corrupted state?
2. Are there recommended practices or changes in RxDB that could help prevent this issue?
3. Is there a way to safely reset the database, or to detect and recover from such corruption after an app crash?
4. Any suggestions for handling large replication tasks in resource-constrained environments like an iPad to avoid such crashes?
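On detecting and recovering from corruption after a crash, the pattern I am considering looks roughly like the sketch below. The error-classification heuristic is my own assumption about what a truncated-JSON failure would surface as (it should be adjusted to the actual error seen on device); `removeRxDatabase` is RxDB's documented API for deleting a database by name and storage.

```typescript
// Heuristic (assumption, not verified against RxDB internals): treat
// JSON parse failures at startup as likely storage corruption.
function looksLikeStorageCorruption(err: unknown): boolean {
  if (err instanceof SyntaxError) return true; // JSON.parse throws SyntaxError
  const message = err instanceof Error ? err.message : String(err);
  return /Unexpected end of JSON input|Unexpected token/.test(message);
}

// Intended usage (sketch, assuming RxDB's removeRxDatabase):
//
// try {
//   db = await createMyDatabase();
// } catch (err) {
//   if (looksLikeStorageCorruption(err)) {
//     await removeRxDatabase('mydb', storage); // wipe, then re-replicate from scratch
//   } else {
//     throw err;
//   }
// }
```

Is something along these lines reasonable, or is there a built-in mechanism for this?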
Additional Notes:
• Since I cannot directly access the OPFS storage on the iPad to inspect the file contents, any guidance on remote debugging or logging would be helpful.
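For instance, I imagine something like the following sketch, which walks OPFS and logs every file's path and size so the output could be shipped to a remote logger. The minimal structural interfaces below mirror the standard `FileSystemDirectoryHandle` / `FileSystemFileHandle` APIs; in the browser the walk would start from `await navigator.storage.getDirectory()`.

```typescript
// Minimal structural types mirroring the browser's OPFS handle APIs,
// so the walker can be exercised outside the browser as well.
interface FileHandleLike {
  kind: 'file';
  getFile(): Promise<{ size: number }>;
}
interface DirHandleLike {
  kind: 'directory';
  entries(): AsyncIterableIterator<[string, FileHandleLike | DirHandleLike]>;
}

// Recursively collect "path (size bytes)" lines for every file in the tree.
async function listOpfsFiles(dir: DirHandleLike, prefix = ''): Promise<string[]> {
  const lines: string[] = [];
  for await (const [name, handle] of dir.entries()) {
    const path = `${prefix}/${name}`;
    if (handle.kind === 'file') {
      const file = await handle.getFile();
      lines.push(`${path} (${file.size} bytes)`);
    } else {
      lines.push(...(await listOpfsFiles(handle, path)));
    }
  }
  return lines;
}

// In the browser:
// const root = await navigator.storage.getDirectory();
// console.log((await listOpfsFiles(root)).join('\n'));
```

Something like this could at least confirm whether a storage file was left at an unexpected size after a crash.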
• "rxdb": "16.9.0"
• "rxdb-premium": "16.9.0"
• Platform: iPad (using OPFS for storage)
I appreciate any help or suggestions on how to resolve this issue. Thank you!