Data Archiver Template

How To

A quick guide to easily set up and use the Data Archiver solution

Welcome to the Data Archiver Template!


This guide contains critical information: you are required to read it all carefully to avoid unwanted data loss. Please consider this a simple, handy utility to free up space on your doc(s). It is not a full-fledged backup facility.
The main purpose of the Data Archiver is to offload unused data to keep a lightweight document to work on. Currently, it is not intended to restore data once archived, so the destination table cannot be used to put data back. Future releases might provide such functionality.
This template works in tandem with the Data Archiver Pack. Although not strictly necessary, this template provides almost all the configuration already set up, so that you're ready to archive in less than one minute.
Keep in mind that “archiving” your data means that each eligible row (according to your configuration) of your source table will be stored in a single column of the destination table (Record), along with its structure containing column names and column values. All attachments will be stored in a single column (Attachments).


Please be aware that in default mode, i.e. without selecting Backup Only (see Configuration below), the source data will be deleted from the source table once archived, and it cannot be restored.

Document Template Sections

There are two visible pages (in fact, just tables) and one hidden section:

A page with the table that contains the data you archived. You can extend this table, but we strongly suggest keeping it as the reference, as it contains the schema the Pack expects in order to work correctly.

Archived Data Schema

These are the required columns to correctly store your data: removing or altering any of the following will cause the Pack to fail archiving.
Document Name: Source document name
Document Id: Source document ID
Table Name: Source table name
Table Url: Browser link to the source table
Table Id: Source table ID
Row Id: Row ID in the source table (note that this might not point to any row after it has been removed)
Row Href: API link of the archived row (note that this might not point to any row after it has been removed)
Record: The string representation of the full row in JSON notation (for each column you'll have {id: "ColumnId", name: "ColumnName", value: "ColumnValue"}). Don't worry: you don't have to deal directly with this unless you want to.
Attachments: All the attachments of that row in a single column. This means that if your source row has several file/image columns, they will all be collected in this column.
Also, while not mandatory, a couple of other convenience fields have been added:
Select Column: Drop-down that allows you to pick one of the columns in the record
Value: The value of the selected column
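Since the Record column stores each row as a JSON list of {id, name, value} objects (as described above), you can extract a column's value with a few lines of code if you ever need to. This is a minimal sketch; the record content and the `column_value` helper are illustrative, not part of the Pack:

```python
import json

# A hypothetical Record value, following the {id, name, value}
# structure described above (ids and values are made up).
record = json.dumps([
    {"id": "c-abc123", "name": "Task", "value": "Write report"},
    {"id": "c-def456", "name": "Status", "value": "Done"},
])

def column_value(record_json: str, column_name: str):
    """Return the value of the named column from an archived Record."""
    for col in json.loads(record_json):
        if col["name"] == column_name:
            return col["value"]
    return None  # column not present in this record

print(column_value(record, "Status"))  # → Done
```

The same lookup is what the template's Select Column / Value convenience fields do for you inside the doc.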

This page contains a simple sync table displaying the outcome of all archiving sessions with relevant data (see schema below). In order to keep track of the history, we suggest keeping unsynced rows turned on (already set for this doc). If you keep only synced rows (i.e. if you uncheck it), your sync table will only display the last archiving session. Depending on how many rows are affected, the full archiving might take some time (from a few seconds to several minutes).

Session Log Schema

SessionId: Unique identifier for each synchronization
Timestamp: When the session ended
Source Document Name: Name of the doc containing the source table/view
Source Table Name: Name of the archived table/view
Source Table Url: Browser link pointing at the source table
Destination Table Url: Browser link pointing at the destination table (where data is archived)
Number of Rows Archived: Total number of rows archived in the session
Archive Policy Type: Either "Time-Based" (default) or "User-defined filter column", based on whether a specific column name is provided (see Configuration)
Archive Policy Value: Based on the previous Type: if Time-Based, the number of days; otherwise, the column name that has been provided
Backup Only: Whether data is stored in the destination table but not deleted from the source afterwards
Time: The overall time taken by the session
Message: Outcome message of the session: either "Successful" or the exception that occurred. Note that rows are deleted from the source only after they are stored in the archive, so there should be no data loss.

Hidden section containing base tables and utilities not normally needed to use this doc. Change this only if you know what you are doing.


There are a few simple steps to get the Data Archiver up and running. Using this template, most of the work has already been set up for you, so you can skip those steps if you're using a doc created from the default template.
Install pack [Already done in this doc]: Insert → Packs, search for Data Archiver, then install it.
Set up your account: In Insert → Packs → Data Archiver → Settings, you are required to set up an account that is able to read/write data both from the source and the destination (this doc).
Add sync table [Already done in this doc]: Select the ArchiveExecutionLog building block and drag it into a page.
Create destination table [Already done in this doc]: it's the table already provided in this template, and likely you don't have to do anything else. If you want to change it or build it from scratch, please refer to the Archived Data Schema above.
Define archiving params (see Configuration below)
Run the archiver: Just sync the sync table (Session Log) manually, or schedule it in Settings/Refresh Rate.


In order to start archiving, you need to define these sync table parameters (accessible from "Choose what to sync").

Pack Parameters

Account: The account you set up when configuring the pack. You can always change your account.
Source Document URL: The full URL of the document containing the table you want to archive.
Source Table ID: The table ID of the table or view you want to archive. You can copy the table ID by clicking the kebab menu (three vertical dots) beside the table name and then selecting "Copy Table ID".
Destination Document URL: [This document URL] The full URL of the document containing the destination table (i.e. the one containing your archived rows). By default, if using this template, the destination doc is the document itself, so a formula is provided for convenience (thisDocument.ObjectLink()). If you are defining another destination doc (different from this doc), you have to paste its URL.
Destination Table ID: [Already present if using this template] The table ID of the destination table. By default, if using this template, it is the ID of the destination table provided in this document. If you are defining another destination doc (different from this doc), you have to change it accordingly.
Days since Modification: The number of days since the last modification after which a row is archived. E.g. if your row was last updated on March 1st and you set 30 days, it becomes eligible for archiving on March 31st. This allows you to keep your "living" data at hand and only archive data that hasn't changed in a while. The default is 90 days. Any number <= 0 will archive ALL the rows.
Backup Only [Optional]: By default, an archived row is also removed from the original table. If you check this box, you only back up your data and your rows won't be deleted from the source. Note: remember that if you leave it unchecked, data will be deleted from the source and cannot be restored.
Filter Column Name [Optional]: This parameter allows you to specify a custom column implementing a more specific archiving policy. For instance, if you need something that is not only time-related, but also based on specific rules (such as status, owner, team, etc.), or even manual selection, you can add a checkbox column to your source data: rows with that column checked will be selected for archiving. Note that if you provide this parameter, the Days since Modification parameter will be ignored.


Multi sync

You can decide to archive multiple tables from different docs at once. By default they will be archived in the same destination table, but you can decide to split them into different categories, or even use one destination for each source table.

Multiple sources

As you may notice, in the Pack parameters you can add as many accounts as you want and provide multiple table configurations, so that when the sync runs, all these tables will be archived. If you have many tables to configure, we suggest using named variables to store the parameters that are used multiple times (likely the destination URL and destination table).

Multiple destinations

Alternatively, still within the same configuration, you might want to archive different tables in different destinations. In that case, the best approach is to copy/paste the destination table (without selecting "connected data" when pasting) and then provide its ID(s) in the different parameter configurations.
