Video: Fauna CEO Evan Weaver in Conversation With Alexy Khrabov
In January 2017, Fauna hosted the SF Scala meetup. Alexy Khrabov, organizer of SF Scala, took a few minutes to catch up with our CEO and co-founder, Evan Weaver.
Alexy: Hello everybody! I am Alexy Khrabov, organizer of SF Scala, here on location at Fauna, and with us we have Evan Weaver, CEO and co-founder of Fauna. We actually met, I would say, about a year ago or more, and talked about Fauna as a new kind of database. Now Fauna has opened its doors for the SF Scala meetup for the first time. So we’re really excited to be here and to catch up and see what’s new with Fauna.
Evan: Thanks Alexy. Yeah, we moved into this place about two months ago. Before that we were in the top floor of a converted Victorian near South Park. That’s where I first met you. Before that we were in my basement, a long time ago.
We’re 14 people now, preparing for the public launch of FaunaDB in March. So we’re rolling out previews, especially previews of the Serverless Cloud product, and basically finishing up the business side of things: billing systems, improving tutorials, documentation…all the kind of stuff that you need to use Fauna as a self-service adopter in particular.
The database has been in production for a couple of years now. In particular, NVIDIA is our biggest early partner. They have an on-premises deployment spanning three data centers and a couple dozen nodes, and they scale it up and down over time.
Alexy: So I remember we talked about Fauna and I thought, that’s really a cool way to use the database, right, because it’s more than a key-value store. It’s an object store. Can you talk a little bit about Fauna as a core database? What does it give you that traditional databases don’t?
Evan: Yes. Fauna is a temporal, object-relational, consistent, distributed database. Our goal is to basically marry all the kinds of different query patterns you would typically use from different domains like relational, document, graph, even search and analytics eventually, into a single coherent system. So you can scale that underlying piece of infrastructure and use it for all your different workloads, and share data across teams. Share data across data centers, because it’s a globally distributed system as well.
Basically, get back to a world where you can integrate through the database and trust that the database will be the single lever you can move when you need to scale your application up and down.
Alexy: Right. And for this you basically need everything inside the database, analytics queries included. So you have your own DSL for the queries, which you control, basically?
Evan: Yeah, the interface is a functional interface, similar to LINQ in the C# world. You use embedded DSLs in your application languages. That means that your queries are type safe. You don’t have to learn a new syntax. You just have to learn Fauna semantics.
It’s pretty functional too, in a functional programming way. You do things like map and fold over the core database primitives, page through indexes, and you can do compute and set arithmetic and that kind of thing. Ultimately you can write a very rich query, even richer than SQL allows, ship it to the database, and trust that the database will execute it in the most efficient way against the underlying data.
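The embedded-DSL idea can be sketched roughly like this. This is a toy illustration in Python, not the actual Fauna driver API; the combinator and index names are made up. The point is that queries are built as ordinary values in the host language, so they compose like functions and can be checked before anything is sent to a server.

```python
# Toy sketch of an embedded query DSL (hypothetical, not the real Fauna driver):
# each combinator returns a plain data structure, so queries compose in the
# host language and can be inspected or validated before execution.

def match(index, term):
    """Select the set of index entries matching a term in a named index."""
    return {"op": "match", "index": index, "term": term}

def map_(fn, source):
    """Apply a function expression to every element of a set."""
    return {"op": "map", "fn": fn, "source": source}

def paginate(source, size=64):
    """Cursor through a set, `size` results at a time."""
    return {"op": "paginate", "source": source, "size": size}

# Compose a query the way you would compose ordinary function calls:
query = paginate(map_("get", match("users_by_city", "SF")), size=10)
```

Because the query is just nested data, the same composition style works from any host language with a driver, which is what makes the LINQ comparison apt.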
Alexy: Interesting, I already like it. I like my query language to be more Fauna than SQL. If it’s functional, that’s even better. This is closer to the functional languages, right, because the problem with SQL is that it’s so declarative. You can lose track and eventually create a bunch of temporary tables and decompose it in weird ways.
So how does it feel to write a long, complicated Fauna query program?
Evan: Yeah, it’s exactly like that. You’re dealing with immutable data structures and parallel computation. But the computation is explicit. You essentially compose the query plan in your query, so you know what the database is going to do. It’s not going to change its mind and start optimizing in a different way as your data set grows.
You don’t have to run EXPLAIN to understand which indexes it’s going to use. You say these are my indexes, I’m going to start with them, and I want to do these operations, and I want to do this set algebra, get some results, cursor through it, that kinda thing. So it’s very explicit. That lets us guarantee a very consistent performance profile as you scale your systems up and down. You can trust that the execution pattern isn’t going to change.
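One way to picture the "no EXPLAIN needed" point: the caller names the indexes and the set operations up front, so the execution order is fixed by the query itself rather than chosen by a planner. A toy in-memory model, purely illustrative with made-up index names:

```python
# Toy in-memory model of explicit index selection plus set algebra.
# The query author names the indexes directly; there is no planner that
# might pick a different access path as the data grows.
indexes = {
    "users_by_city": {"SF": {"alice", "bob"}, "NYC": {"carol"}},
    "users_by_role": {"admin": {"alice", "carol"}},
}

def match(index, term):
    """Look up the set of matches for a term in a named index."""
    return indexes[index].get(term, set())

def intersection(*sets):
    """Explicit set algebra: intersect the given sets, left to right."""
    out = sets[0]
    for s in sets[1:]:
        out = out & s
    return out

# "Admins in SF": the access path is spelled out in the query itself.
result = intersection(match("users_by_city", "SF"),
                      match("users_by_role", "admin"))
# result == {"alice"}
```

Since the indexes and operations are explicit, the performance profile of this query is the same every time it runs, which is the consistency guarantee being described.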
Alexy: Interesting. So how do you feed your data, like, I think there is a lot of activity in open source to put together data pipelines, right. So you have Kafka feeding Spark, feeding Cassandra, on top of Mesos. How do you compare to the SMACK stack, effectively?
Evan: Fauna…in a way it’s a traditional relational database. It’s just not SQL. So you can use it as a sink for data. You compute somewhere else. But you can also keep your core business objects in a fully normalized way, compose them with queries, push a lot of that, especially index computation that you might do in a second system like Spark or ElasticSearch, into Fauna. It works as a source, too, because the underlying data model is temporal.
You can get change feeds for any query out of the system. So you can say, here’s a distributed graph join, an activity feed: what happened since the last time I looked at it? And rather than just a different result set, you get a bunch of change events that you can use to synchronize something downstream.
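The temporal model behind change feeds can be sketched as an append-only event log that gets replayed from a given timestamp. This is a toy illustration of the idea, not Fauna's actual storage or wire format:

```python
# Toy sketch of a temporal change feed: instead of storing only current state,
# keep an append-only log of (timestamp, action, item) events, and answer
# "what changed since t?" by filtering the log.

events = [
    (1, "add", "post-1"),
    (2, "add", "post-2"),
    (3, "remove", "post-1"),
]

def changes_since(log, ts):
    """Return every change event that happened strictly after timestamp ts."""
    return [e for e in log if e[0] > ts]

# A downstream consumer that last synced at t=1 receives only the new events:
delta = changes_since(events, 1)
# delta == [(2, "add", "post-2"), (3, "remove", "post-1")]
```

A consumer that remembers its last-seen timestamp can therefore stay synchronized by applying only the delta, which is what makes the database usable as a source and not just a sink.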
So, the goal eventually is to internalize all those concerns. Right now it works really well as a canonical store for fully normalized business objects, in particular social graphs, that kind of thing. Or as a low-latency distributed sink for stuff you might compute upstream. Like, you want essentially a document database that you can index and scale, because you have all this offline computation coming from some other process. So where are you going to put it?
Alexy: Well, this all sounds really, really cool. It’s almost like, if I want to build a start-up, I can take this and solve problems. So I wonder, what is the impression from customers? What are their early experiences? Where are they using this instead of a bunch of other things? What are they saying?
Evan: I mean, usually people don’t believe it can be true! If all your queries are isolated, you can dynamically provision where your data lives, you can use our serverless cloud, and not even think about the backend at all.
People have been so badly burned, in particular by the NoSQL movement, that they want us to show them. So we have to do a lot of work to prove that the database is sound and secure and that performance is good, and to demonstrate it on their data sets: we can import the data, replicate the query patterns they already use, and then they get all these scalability, isolation, and performance benefits out of the system.
People like the interface. They’re tired of SQL. It’s unsafe. It’s hard to reason about the performance profile and the security model. It doesn’t work with the way people write modern applications. The proof of that is people put ORMs in front of their relational databases for operational workloads. Not having to deal with that, and being able to directly talk to your database again, it’s…what’s the word…refreshing.
Alexy: What is your go-to-market stage? Are you out of stealth?
Evan: Yes, we’re out of stealth. We have beta customers both on-premises and in the serverless cloud. We’re looking for more. We’re preparing for general availability launch in the next couple months.
Alexy: So if somebody wants to try Fauna, how do they go about it?
Evan: Just go to the website, click the request invite link, and we’ll hook you up.
Alexy: And so can you demo this at the meetup with a realistic application?
Evan: Yes, Chris Anderson, one of the Couchbase founders, joined our team recently. He’s working on a serverless presentation that will have AWS Lambda executing the compute and Fauna as the backend. Then you have a fully serverless end-to-end stack where you never have to think about provisioning, and you can even build a globally distributed dynamic application in a fully serverless model.
Alexy: I’ve got dibs on this presentation when it’s ready. It sounds really exciting. It makes a lot of sense. I can’t wait to see it. So thank you very much for enlightening us. Looking forward to playing more with this.
Evan: Thank you.
Alexy: Once we play some more with this we’ll come back with more questions.