
Fauna Event Feeds Now GA, Snapshot Export Available in Beta

Wyatt Wenzel & Bryan Reinero | Jan 21st, 2025

Categories: Features, News, Company, Event Driven Architecture
We’re excited to announce that Event Feeds—a key component of our change data capture (CDC) capabilities—are now generally available for Pro and Enterprise customers. Event Feeds provide a persistent, queryable history of changes in your Fauna database, making it easier than ever to keep your applications in sync, maintain detailed audit logs, and feed external systems with up-to-date information.
By offering a “pull-based” approach (as opposed to our push-based Event Streams), Event Feeds give you fine-grained control over how and when you process data updates. This flexibility is ideal for synchronizing large datasets to data warehouses, replaying events that occurred while your application was offline, or implementing delayed-processing scenarios, all while retaining the benefits of Fauna’s serverless, globally distributed architecture.
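The pull model described above reduces to a simple loop: fetch events after a saved cursor, process them, persist the new cursor, and sleep until the next pass. Here is a minimal, driver-free sketch of that loop in Python; `poll_once`, `fake_fetch`, and the event shapes are illustrative stand-ins, not the Fauna driver API:

```python
# Illustrative sketch of the pull model (not the Fauna driver API):
# `fetch_events(cursor)` stands in for a driver call and must return
# a tuple of (events, next_cursor).
def poll_once(fetch_events, handle, cursor=None):
    """One pull pass: fetch events after `cursor`, handle each, and
    return the new cursor so the caller can persist it and sleep."""
    events, next_cursor = fetch_events(cursor)
    for event in events:
        handle(event)  # sync a warehouse, append to an audit log, etc.
    return next_cursor

# Tiny in-memory stand-in for an event source.
log = [{"type": "add", "id": 1}, {"type": "update", "id": 1}]

def fake_fetch(cursor):
    start = cursor or 0
    return log[start:], len(log)

seen = []
cursor = poll_once(fake_fetch, seen.append)          # first pass: 2 events
cursor = poll_once(fake_fetch, seen.append, cursor)  # nothing new yet
```

Because the consumer decides when `poll_once` runs, it can batch work, back off under load, or catch up after downtime, which is exactly what the push model cannot offer.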

Introducing Snapshot Export

Additionally, we are thrilled to introduce the beta of Snapshot Export, which complements Event Feeds by allowing you to pull complete, transactionally consistent snapshots of your database. This new feature enables you to seed complementary analytics platforms, vector databases, and search engines (e.g., Snowflake, ClickHouse, Redshift, Algolia, or Pinecone), while also addressing compliance requirements by giving you full control over your data snapshots.
Fauna delivers these snapshots to an AWS S3 bucket of your choice, using a predictable naming convention that includes database IDs and timestamps. You can create and monitor the status of export jobs via our HTTP API or the Fauna CLI, making it easy to incorporate exports into existing workflows. Snapshot Export is available for Enterprise customers. Check out the docs to learn more, and please contact us if you’d like to join the beta.
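Because the S3 keys are predictable, downstream jobs can locate the snapshot they need without listing the whole bucket. The sketch below parses a key into its parts; note that the key layout shown (`<prefix>/<database_id>/<unix_ts>/<file>`) is a hypothetical example, not Fauna's documented convention, so consult the docs for the real format:

```python
from datetime import datetime, timezone

# Hypothetical key layout for illustration only; the actual convention
# is documented by Fauna and includes database IDs and timestamps.
# Assumed shape here: <prefix>/<database_id>/<unix_ts>/<file>
def parse_snapshot_key(key: str) -> dict:
    prefix, database_id, unix_ts, filename = key.split("/")
    exported_at = datetime.fromtimestamp(int(unix_ts), tz=timezone.utc)
    return {
        "database_id": database_id,
        "exported_at": exported_at,
        "file": filename,
    }

info = parse_snapshot_key("exports/db_123/1737417600/part-0000.json")
```

Extracting the database ID and timestamp this way lets an ingestion pipeline route each snapshot to the right warehouse table and skip snapshots it has already loaded.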
“With Fauna’s Event Feeds, we will be able to easily pull historical changes on our own schedule, which gives us flexibility for everything from syncing large datasets into our data warehouse to replaying events for audit and compliance,” shared Marcelo Reyna, Head of Infrastructure and Cybersecurity at Differential. “With Snapshot Export, having the ability to capture a full, transactionally consistent view of our entire database means we can confidently integrate with analytics platforms in our stack without missing a beat.”

Getting started with Event Feeds

To begin working with Event Feeds, you’ll first identify an event source—a Set of documents or fields you want to track. Unlike Event Streams, which push updates as they happen, Event Feeds let you pull historical changes on demand.
For example, using Fauna’s JavaScript driver, you can query events from a specific window of time:
import { Client, fql } from "fauna";

const client = new Client();

async function processFeed(client, query, startTs = null, sleepTime = 300) {

  let cursor = null;
  while (true) {
    // Only include `start_ts` if `cursor` is null. Otherwise, only include `cursor`.
    const options = cursor === null ? { start_ts: startTs } : { cursor: cursor };
    const feed = client.feed(query, options);

    for await (const page of feed) {
      for (const event of page.events) {
        switch (event.type) {
          case "add":
            console.log("Add event: ", event);
            break;
          case "update":
            console.log("Update event: ", event);
            break;
          case "remove":
            console.log("Remove event: ", event);
            break;
        }
      }
      // Store the cursor of the last page
      cursor = page.cursor;
    }

    // Clear startTs after the first request
    startTs = null;
    console.log(`Sleeping for ${sleepTime} seconds...`);
    await new Promise(resolve => setTimeout(resolve, sleepTime * 1000));
  }
}

const query = fql`Product.all().eventsOn(.price, .stock)`;
// Calculate timestamp for 10 minutes ago
const tenMinutesAgo = new Date(Date.now() - 10 * 60 * 1000);
const startTs = Math.floor(tenMinutesAgo.getTime() / 1000) * 1000000;

processFeed(client, query, startTs);
In this example, you:
  • Set a start_ts to begin fetching events from a specific point in time.
  • Iterate over returned pages of events, performing logic for each event.
  • Store the returned cursor to efficiently fetch subsequent events later on.

Hands-on example: Event Feeds sample app

Want a more complete example? Check out Fauna’s Event Feeds sample app which uses AWS Lambda and the Fauna Python driver. This application polls for events every 10 minutes and sends them to another service—demonstrating how you can build a serverless, event-driven architecture around Fauna.
Here’s a snippet from the sample app using Python:
import json
from fauna import fql
from datetime import datetime, timedelta
from fauna.client import Client, FeedOptions

def lambda_handler(e, context):

  client = Client()

  # Get the previous feed cursor if it exists
  cursor = None
  options = None

  cursor_data = client.query(fql('Cursor.byName("ProductInventory").first()'))
  # Guard against the cursor document not existing yet
  cursor = cursor_data.data.value if cursor_data.data else None

  # If no cursor exists, capture all events from previous 10 minutes
  if cursor is None:
    # Calculate timestamp for 10 minutes ago
    ten_minutes_ago = datetime.now() - timedelta(minutes=10)
    # Convert to microseconds
    start_ts = int(ten_minutes_ago.timestamp() * 1_000_000)

    options = FeedOptions(
      start_ts=start_ts
    )

  feed = client.feed(fql('Product.where(.stock < 25).eventSource()'), options)

  for page in feed:
    for event in page:
      event_type = event['type']
      if event_type == 'add':
        # Do something on add
        print('Add event: ', event)
      elif event_type == 'update':
        # Make an API call to another service on event
        # (i.e. email or slack notification)
        print('Update event: ', event)
      elif event_type == 'remove':
        # Do something on remove
        print('Remove event: ', event)

    # Store the cursor of the last page
    cursor = page.cursor
    # Store the cursor in the database
    cursor_update = client.query(fql('''
      Cursor.byName("ProductInventory").first()!.update({
        value: ${cursor}
      })
    ''', cursor=cursor))

    print(f'Cursor updated: {cursor}')

  return {
    "statusCode": 200,
    "body": json.dumps({
        "message": "Event feed processed successfully"
    })
  }
This Python example shows how you can integrate event feeds into a serverless function that periodically checks for changes, processes them, and updates its cursor for the next iteration—no manual intervention needed.
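As the number of event types or downstream actions grows, the `if`/`elif` chain above can be swapped for a dispatch table that maps event types to handler functions. A small sketch (the handler functions and event shapes are illustrative, not part of the sample app):

```python
# Illustrative handlers; in practice these would notify Slack,
# write to a warehouse, etc.
def on_add(event):
    return f"added {event['doc']}"

def on_update(event):
    return f"updated {event['doc']}"

def on_remove(event):
    return f"removed {event['doc']}"

HANDLERS = {"add": on_add, "update": on_update, "remove": on_remove}

def dispatch(event):
    # Unknown event types are ignored rather than crashing the poller.
    handler = HANDLERS.get(event["type"])
    return handler(event) if handler else None

result = dispatch({"type": "update", "doc": "product/1"})
```

Unknown event types fall through to `None` rather than raising, so an unexpected event in the feed does not take down the Lambda function.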

Get started today

To get started:
  1. Review the Event Feeds documentation for a deeper dive into capabilities and usage patterns.
  2. Explore the Event Feeds sample app to learn how to integrate event feeds into a serverless architecture.
  3. If you’re building with Cloudflare, the Fauna & Cloudflare serverless workshop incorporates Event Feeds along with additional robust Fauna functionality.
  4. Sign up for an account or contact us if you’re not on Pro or Enterprise yet, or want to discuss how event feeds fit into your specific use case.
With Event Feeds and Snapshot Export, Fauna empowers you to manage changes on your terms. We can’t wait to see the powerful solutions you build.

If you enjoyed our blog and want to work on challenges related to globally distributed systems and serverless databases, Fauna is hiring.
