Transfer data in Fauna to your analytics tool using Airbyte
We are excited to introduce Fauna’s new Airbyte open source connector and Airbyte Cloud Connector. The Airbyte connector lets you replicate Fauna data into your data warehouses, lakes, and analytical databases, such as Snowflake, Redshift, S3, and more.
Why Airbyte
With the proliferation of applications and data sources, companies often have to build custom connectors to transfer data across their architectures. Most ETL (extract, transform, and load) tools require maintaining and updating these connectors as requirements change over time. Airbyte is an open source data pipeline platform that eliminates this burden by offering a robust ecosystem of connectors that scales without requiring you to maintain the connectors yourself.
Why Fauna
Fauna is a distributed document-relational database delivered as a cloud API. Developers choose Fauna’s document-relational model because it combines the flexibility of NoSQL databases with the relational querying and ACID capabilities of SQL databases. This model is delivered as an API so you can focus on building features without worrying about operations or infrastructure management.
Why Fauna + Airbyte
Fauna and Airbyte both improve productivity and developer experience. Together, the connector lets you port and migrate transactional data from Fauna to your choice of analytical tools to drive business insights.
Continue reading for a guide on how to configure the Fauna source connector to transfer your database to one of the data analytics or warehousing destination connectors supported by Airbyte.
The Fauna source supports the following ways to export your data:
- Full refresh append sync mode copies all of your data to the destination, without deleting existing data.
- Full refresh overwrite sync mode copies the whole stream and replaces data in the destination by overwriting it.
- Incremental append sync mode periodically transfers new, changed, or deleted data to the destination.
- Incremental deduped history sync mode copies new records from the stream and appends data in the destination, while providing a de-duplicated view that mirrors the state of the stream in the source.
Prerequisites
You need a destination database account, and you need to set up the data build tool (dbt™) to transform fields in your documents into columns in your destination. You also need to install Docker.
Create a destination database account
If you do not already have an account for the database associated with your destination connector, create an account and save the authentication credentials for setting up the destination connector to populate the destination database.
Set up dbt
To access the fields in your Fauna source using SQL-style statements, create a dbt account and set up dbt as described in the Airbyte Transformations with dbt setup guide. The guide steps you through the setup for transforming the data between the source and destination, and connects you to the destination database.
Install Docker
The Fauna connector is an Airbyte Open Source integration, deployed as a Docker image. If you do not already have Docker installed, follow the Install Docker Engine guide.
Step 1: Set up the Fauna source
Depending on your use case, set up one of the following sync modes for your collection.
Full refresh sync mode
Follow these steps to fully sync the source and destination database.
1. Use the Fauna Dashboard or `fauna-shell` to create a role that can read the collection to be exported. The Fauna source needs access to the Collections resource so that it can find which collections are readable. This does not give it access to all the collections, just the names of all the collections. For example:

CreateRole({
  name: "airbyte-readonly",
  privileges: [
    {
      resource: Collections(),
      actions: { read: true }
    },
    {
      resource: Collection("COLLECTION_NAME"),
      actions: { read: true }
    }
  ],
})
Replace `COLLECTION_NAME` with the collection name for this connector.

2. Create a secret that has the permissions associated with the role, using the `name` of the role you created. For example:

CreateKey({
name: "airbyte-readonly",
role: Role("airbyte-readonly"),
})
This returns a result similar to:

{
ref: Key("341909050919747665"),
ts: 1662328730450000,
role: Role("airbyte-readonly"),
secret: "fnAEjXudojkeRWaz5lxL2wWuqHd8k690edbKNYZz",
hashed_secret: "$2a$05$TGr5F3JzriWbRUXlKMlykerq1nnYzEUr4euwrbrLUcWgLhvWmnW6S"
}
Save the returned `secret`; otherwise, you need to create a new key.
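Optionally, before configuring the connector, you can confirm that the new key can read the collection. The following is a minimal check, not part of the original setup, that assumes you start `fauna-shell` with the new secret and that the collection is named `COLLECTION_NAME`:

Paginate(Documents(Collection("COLLECTION_NAME")))

If the role is set up correctly, this returns a page of document references rather than a permission error.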
Incremental append sync mode
Use incremental sync mode to periodically sync the source and destination, updating only new and changed data.
Follow these steps to set up incremental sync.
1. Use the Fauna Dashboard or `fauna-shell` to create an index, which lets the connector do incremental syncs. For example:

CreateIndex({
name: "INDEX_NAME",
source: Collection("COLLECTION_NAME"),
terms: [],
values: [
{ "field": "ts" },
{ "field": "ref" }
]
})
Replace `INDEX_NAME` with the name you configured for the Incremental Sync Index. Replace `COLLECTION_NAME` with the name of the collection configured for this connector.

|Index values|Description|
| --- | ----------- |
|`ts`| Last modified timestamp.|
|`ref`|Unique document identifier.|
2. Create a role that can read the collection and index, and can access index metadata to validate the index settings. For example:
CreateRole({
name: "airbyte-readonly",
privileges: [
{
resource: Collection("COLLECTION_NAME"),
actions: { read: true }
},
{
resource: Index("INDEX_NAME"),
actions: { read: true }
},
{
resource: Indexes(),
actions: { read: true }
}
],
})
Replace `COLLECTION_NAME` with the name of the collection configured for this connector. Replace `INDEX_NAME` with the name that you configured for the Incremental Sync Index.

3. Create a secret key that has the permissions associated with the role, using the `name` of the role you created. For example:

CreateKey({
name: "airbyte-readonly",
role: Role("airbyte-readonly"),
})
This returns a result similar to:

{
ref: Key("341909050919747665"),
ts: 1662328730450000,
role: Role("airbyte-readonly"),
secret: "fnAEjXudojkeRWaz5lxL2wWuqHd8k690edbKNYZz",
hashed_secret: "$2a$05$TGr5F3JzriWbRUXlKMlykerq1nnYzEUr4euwrbrLUcWgLhvWmnW6S"
}
Save the returned `secret`. You need to enter the secret when you configure the Fauna source connector in Airbyte (see Step 3). It is important to save the key; otherwise, you need to create a new key if you lose the provided secret.

The Fauna source iterates through all indexes on the database. For each index it finds, the following conditions must be met for incremental sync:
- The source must be able to `Get()` the index, which means it needs read access to this index.
- The source of the index must be a reference to the collection you are trying to sync.
- The number of values must be two.
- The number of terms must be zero.
- The values must be equal to `{"field": "ts"}` followed by `{"field": "ref"}`.
All of the above conditions are checked in the order listed. If a check fails, it skips that index.
If no indexes are found in the initial setup, incremental sync isn't available for the given collection. No error is emitted because it can't be determined whether or not you are expecting an index for that collection.
If you find that the collection doesn't have incremental sync available, make sure that you followed all the setup steps, and that the source, terms, and values all match for your index.
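To check an index against these conditions yourself, you can inspect it from the Fauna Dashboard or `fauna-shell`. This is a quick sanity check, not part of the original setup, and it assumes the `INDEX_NAME` and `COLLECTION_NAME` used earlier:

Get(Index("INDEX_NAME"))

In the returned index document, confirm that `source` references your collection, that there are no `terms`, and that the `values` correspond to the `ts` and `ref` fields. You can also page the index to see the `[ts, ref]` entries it exposes:

Paginate(Match(Index("INDEX_NAME")))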
Step 2: Deploy and launch Airbyte
Deploy and launch locally
- Refer to the Deploy Airbyte instructions to install and deploy Airbyte. Enter the following commands to deploy the Airbyte server:
git clone https://github.com/airbytehq/airbyte.git
cd airbyte
docker-compose up

- When the Airbyte banner displays, launch the Airbyte dashboard at http://localhost:8000.
- To log in, enter the default credentials found in the .env file of the cloned repository:

BASIC_AUTH_USERNAME=airbyte
BASIC_AUTH_PASSWORD=password
- Choose the Connections menu item to start setting up your data source.
- In the Airbyte dashboard, click the + New connection button.
Deploy and launch with Airbyte Cloud
- Refer to the Getting Started with Airbyte Cloud guide for the fastest and most reliable way to run Airbyte.
- If this is your first time, go to https://cloud.airbyte.com/signup, sign up for an Airbyte account, and click Create your first connection.
Otherwise, log in to Airbyte, and in the left navigation panel, choose Connections. Choose an existing connection to make changes.
To create a new connection, click the New connection button and continue with the next step.
Step 3: Set up the Fauna source
- In the Source type dropdown, choose Fauna, which lists the configurable Fauna connector parameters. If you previously set up a source, click the Use existing source button to choose that source. A Setup Guide in the right-side panel also gives detailed setup instructions.
- Set the required connector parameters, including the Fauna secret you created in Step 1 and the name of the collection to sync.
- After setting up the source, click the Set up source button. The "All connection tests passed!" message confirms successful connection to the Fauna source. This minimally confirms:
- The secret is valid.
- The collection exists.
Step 4: Set up the destination
- If you previously set up a destination, click the Use existing destination button to select and use that destination. Otherwise, choose the Destination type.
- Destination connector configuration parameters are unique to the destination. Populate the Set up the destination fields according to the connector requirements, including authentication information if needed. A Setup Guide is provided in the right-side panel with detailed setup instructions.
- When you are done, click the Set up destination button and wait for the destination testing to successfully complete.
Step 5: Set up the connection
In the New connection window, accept the default settings or make the changes you want for syncing the source and destination.
- Enter a descriptive name for the connection in the Connection name field.
- Choose a Replication frequency, which is the data sync period. You can choose the Manual option to manually sync the data.
- In the Destination Namespace field, click the Edit button to choose a destination namespace where the data is stored. Options include using the destination-defined default, mirroring the source structure, or specifying a custom format. Click the Apply button when you are done.
- In the Non-breaking schema updates detected field, choose Ignore or Disable connection for how Airbyte handles syncs when it detects a non-breaking schema change in the source.
- In the Activate the streams you want to sync section, click the > arrow to expand the available fields. A document is deleted if it is not modified within the `ttl` time interval; the default value of `null` means `ttl` is not used. After document deletion, the document is not displayed in temporal queries and the connector does not emit a `deleted_at` row.
- Select `ref` as the Primary key. This uniquely identifies the document in the collection.
- In the Sync mode field, click the options to choose the combination of options that defines the source sync behavior. Fewer than four options indicates that the index is incorrectly set up; see Step 1: Set up the Fauna source. A new incremental sync gets the full database, the same as a full sync.
- Choose the Normalization data format: raw data (JSON) or normalized tabular data.
- Click the + Add transformation button to add the dbt transform. To extract the fields in the source `data` column, you need to configure dbt to map source data to destination database columns. For example, the following SQL-based query extracts the `name`, `account_balance`, and `credit_card/expires` fields from the source `data` column to populate three separate columns of the destination data:

with output as (
    select
        data:name as name,
        data:account_balance as balance,
        data:credit_card:expires as cc_expires
    from airbyte_schema.users
)
select * from output
- Click the Set up connection button.
Step 6: Sync the data
On the Connection page for the connection you created, click the Sync now button if the sync hasn't already started.
The time to run a sync varies; the sync status is displayed in Sync History. When the sync completes, the status changes from Running to Succeeded and shows:
- The number of bytes transferred.
- The number of records emitted and committed.
- The sync duration.
Click the Cancel sync button to cancel a sync in progress.
Step 7: Verify the integration
When the sync completes, click Sync Succeeded to view the Sync History.
Confirm that the data has transferred successfully by opening and viewing the destination database.
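One optional way to cross-check the transfer (a suggestion, not part of the original guide) is to compare record counts. On the Fauna side, assuming the same collection name as earlier, you can count the source documents in `fauna-shell`:

Count(Documents(Collection("COLLECTION_NAME")))

Compare the result with the number of rows in the corresponding table in your destination, keeping in mind that documents added or deleted after the sync started may cause small differences.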
Conclusion
Integrating Fauna with the Airbyte open source and Cloud solutions arms developers building on Fauna with a powerful tool for gaining insights into their operational data. If you have any questions about the open source or Cloud connectors, feel free to reach out and ask questions in our forum or on our Discord.
If you enjoyed our blog and want to work on challenges related to globally distributed systems and serverless databases, Fauna is hiring!