Get visibility into the cost and performance of queries with Fauna Logs

Shadid Haque | Feb 15th, 2023


We are thrilled to announce the general availability of Fauna Logs with an initial focus on query performance and insights. Fauna Logs allows users to gather valuable insights about their database systems' performance, behavior, and usage. Read on to learn how you can use this new feature.

Why query performance logs are important

By analyzing database query logs, developers can identify slow-performing queries and bottlenecks and optimize for performance and cost. Additionally, query logs can help debug and diagnose application issues by providing a historical record of executed queries.
Continuous monitoring of query logs also enables developers to identify the most common user queries in the application and gain a deep understanding of their application's most frequently used features.

Accessing Fauna Logs

Access logs through the Fauna Dashboard

The easiest way to access Fauna query logs is through the dashboard user interface. Note that this feature is only available on the Team and Business plans.
Head over to the Fauna Dashboard. From the dashboard menu, select the user profile icon, then select Settings.
In the account settings menu, you will find a new option called Logs.
Select the Logs option. It takes you to a section where you can export query logs from Fauna. From the menu, select New Export or Export Logs.
In the Query Log menu, select a database. Then select the date range for which you want to export the logs. Next, select Export Logs.
Fauna will generate a log bundle for your database. Select Download Bundle to download your query logs.
Fauna downloads the bundle as a zip file. Unzipping it produces a .jsonl file that contains all of your database log information. Each line of this file is a valid JSON object. An abridged sample entry is shown below.

   {
      ...
      "REQUEST_HEADERS":"{\"user_agent\":\"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/ Safari/537.36\",\"x_forwarded_for\":\"\"}",
      "RESPONSE_CODE":"201",
      ...
      "TS":"2023-02-09 02:36:02.290 Z",
      ...
   }

Each entry contains the following fields.
BYTE_READ_OPS — Number of byte read ops (https://docs.fauna.com/fauna/current/learn/understanding/billing#read) consumed by this query.
BYTE_WRITE_OPS — Number of byte write ops (https://docs.fauna.com/fauna/current/learn/understanding/billing#write) consumed by this query.
COMPUTE_OPS — Number of compute ops (https://docs.fauna.com/fauna/current/learn/understanding/billing#compute) consumed by this query.
DATABASE — The "path" to the database for this log entry. Each entry in the path is a database name. The first entry is the "top-level" database, and each entry to the right, when included, represents a child database.
QUERY_TIME_MS — Query processing time required for this query, in milliseconds.
REGION_GROUP — Identifier for the Region Group of this query. legacy represents the Classic region group in the Preview environment.
RESPONSE_CODE — HTTP response code for this query. 200 represents "Success". In the preceding log entry example, 201 represents "Created" because the associated query created a collection.
TAGS — List of tags included with the query. {} indicates that the query included no tags.
TRACEPARENT — A W3C-compliant traceparent (https://w3c.github.io/trace-context/#traceparent-header) identifier for the query. If you provided an identifier and the logs do not contain it, you likely provided an invalid identifier, and Fauna generated a valid one for you.
TS — Timestamp for query processing.
TXN_RETRIES — Retry count. The count should be zero unless the query encountered contention.
Similarly, you can export all the query logs for an entire region group. Select the Export logs for the Region Group option and choose a region group while exporting query logs.
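Once you have a .jsonl export, a few lines of code are enough to surface the slowest queries. The sketch below is illustrative, not part of the bundle: parseLogLines and slowestQueries are hypothetical helper names, and the field names follow the list above.

```javascript
// Illustrative helpers for analyzing an exported query-log bundle.
// Field names (QUERY_TIME_MS, TS, BYTE_READ_OPS, ...) follow the
// Fauna Logs entry format described above.

// Parse the raw text of a .jsonl export into an array of log entries.
function parseLogLines(text) {
  return text
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line));
}

// Return the n entries with the highest query processing time.
function slowestQueries(entries, n) {
  return [...entries]
    .sort((a, b) => Number(b.QUERY_TIME_MS) - Number(a.QUERY_TIME_MS))
    .slice(0, n);
}

// Usage against a downloaded bundle (file name is whatever you exported):
//   const fs = require("fs");
//   const entries = parseLogLines(fs.readFileSync("querylogs.jsonl", "utf8"));
//   for (const e of slowestQueries(entries, 5)) {
//     console.log(`${e.TS}  ${e.QUERY_TIME_MS} ms  reads=${e.BYTE_READ_OPS}`);
//   }
```

Sorting on QUERY_TIME_MS is a quick first pass; the same loop can rank entries by BYTE_READ_OPS or COMPUTE_OPS to find the queries that dominate your bill.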

Access Fauna Logs through the command line

For those who prefer the command line, you can pull query logs directly from your terminal. We created a sample command-line application to download query logs; you can explore its code in the fauna-labs/fauna-query-logs-cli-app repository on GitHub.
Clone the repository with the following command.
$ git clone https://github.com/fauna-labs/fauna-query-logs-cli-app
$ cd fauna-query-logs-cli-app
Run the following commands to install dependencies and start the CLI application.
$ npm install
$ npm run demo
The CLI tool prompts you to enter your email and password for authentication. After login, you can select a database or a region group to pull query logs from.
> querylogs-demo@1.0.0 demo
> node index.js

✔ Enter the email address for your account … <Your-email>
? Enter the password for your account › <Your-Password>
Select a database or region group and a date range, and the CLI tool downloads the query logs for you.
Once downloaded, you can explore the logs with your favorite tools or in the terminal. For instance, I analyze them in Emacs in the terminal.
You can also directly export the data into more sophisticated third-party tools such as Logstash or Kibana for observability.

Tagging queries

Currently, you can use the JavaScript driver to tag queries. The following snippet demonstrates adding tags to a query using the JavaScript driver.

	const faunadb = require("faunadb");
	const q = faunadb.query;

	const client = new faunadb.Client({ secret: "YOUR_FAUNA_SECRET" });

	client
	  .query(q.Paginate(q.Collections()), {
	    tags: { key1: "value1", key2: "value2" },
	  })
	  .then((result) => console.log(result));
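Tags pay off when you analyze the export, because you can group billed operations by tag value. The helper below is a sketch, assuming each entry's TAGS field is serialized as a JSON object string (the field list above shows {} for an untagged query); opsByTag is a hypothetical name.

```javascript
// Sketch: total compute ops per value of a given tag key.
// Assumes each entry's TAGS field is a JSON object string, e.g. '{"env":"prod"}'.
function opsByTag(entries, tagKey) {
  const totals = {};
  for (const entry of entries) {
    let tags = {};
    try {
      tags = JSON.parse(entry.TAGS);
    } catch (_) {
      // Leave tags empty if TAGS is missing or not valid JSON.
    }
    const value = tags[tagKey] || "(untagged)";
    totals[value] = (totals[value] || 0) + Number(entry.COMPUTE_OPS || 0);
  }
  return totals;
}
```

For example, tagging every query with an env tag lets you compare the compute cost of production traffic against staging in a single pass over the log file.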


In brief, Fauna Logs give you insight into your database usage and queries. You can optimize your Fauna cost by analyzing query logs and writing efficient queries. In a subsequent tutorial, you will learn how to use query logs with your favorite log metric tools such as Elastic and Datadog. Head over to our documentation to learn more about Fauna Logs.

If you enjoyed this blog and want to work on challenges related to globally distributed systems and serverless databases, Fauna is hiring!
