The quest for the universal protocol: Mainframes, databases, and CORBA

Evan Weaver | Jan 18th, 2022
At Fauna, we have invested in GraphQL in order to offer developers a familiar way to manage their database access. GraphQL allows us to merge the best qualities of accessible web APIs with powerful lower-level RPCs and protocols.
We have written a series of blog posts that cover GraphQL’s history and our current and future product strategy as it relates to GraphQL. In this first post, we will explore the evolution of network APIs from the mainframe era through the client-server era of the 90s.

The network is the computing problem

There are many names for the way computers encode application-level information to pass to each other over a network: interchange format, wire protocol, remote procedure call protocol, application programming interface, binary programming interface, codec. We can break any such protocol down into two parts: the semantics (what the data means) and the format (how the data is represented on the wire).
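As a minimal sketch of that split, here is the same record (the semantics) carried in two different formats: readable JSON text, and an ad-hoc binary layout invented purely for this example.

```python
import json
import struct

# The semantics: "user 42 is named Ada and her account is active."
user = {"id": 42, "name": "Ada", "active": True}

# One format: human-readable JSON text.
as_json = json.dumps(user).encode("utf-8")

# Another format: an ad-hoc packed binary layout (4-byte id, 1-byte flag,
# then a length-prefixed UTF-8 name). Only code that knows this exact
# layout can decode it.
name = user["name"].encode("utf-8")
as_binary = struct.pack(">IBB", user["id"], user["active"], len(name)) + name

print(as_json)    # b'{"id": 42, "name": "Ada", "active": true}'
print(as_binary)  # b'\x00\x00\x00*\x01\x03Ada'
```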
Sometimes these things are completely ad-hoc, in the sort of “the program is what the program does” sense, but usually they are at least partially specified in some kind of standard for interoperability purposes. Let’s start at the beginning.

Beginnings

Some of the first network protocols in widespread use in the 70s and 80s were the terminal command sequences that powered what we would now call “dumb” or “thin” clients for mainframes, things like the DEC VT52 and the VT220. These would be connected by a serial port or other directly-attached proprietary signal bus to the mainframe, but rather than receive some kind of encoded VGA signal (analog) or frame-by-frame dump of the ASCII contents of the application view (binary), they sent and received ANSI escape codes that represented more sophisticated manipulation of the screen contents.
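For a small taste, here is a Python sketch that emits a few of the standard ANSI escape sequences a VT100-style terminal understands; modern terminal emulators still honor them.

```python
import sys

ESC = "\x1b"  # the escape character that introduces each control sequence

def demo() -> None:
    out = sys.stdout
    out.write(f"{ESC}[2J")       # clear the whole screen
    out.write(f"{ESC}[H")        # move the cursor to row 1, column 1
    out.write(f"{ESC}[3;10H")    # move the cursor to row 3, column 10
    out.write(f"{ESC}[1;31mhello{ESC}[0m\n")  # bold red text, then reset attributes
    out.flush()

if __name__ == "__main__":
    demo()
```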
[Image: DEC VT100 video terminal, released in 1978. Jason Scott, CC BY 2.0, via Wikimedia Commons]

Even though it is technically only layer 6 of the OSI model, this, to me, is an API. It’s great if you want to manipulate 2D screens of ASCII text. It’s not very good for doing anything else! Nevertheless, we still use it today in all of our shells which run in software-emulated terminals, and occasionally if we need to use the telnet protocol to access some embedded device. If anything shows the staying power of a useful API, it’s that.

Proprietary protocols

Through the 80s, dumb clients were made smarter and evolved into workstations, and use cases for network APIs started to bifurcate.
[Image: Sun SPARCstation 1+, released in 1990. Fourdee at English Wikipedia, public domain, via Wikimedia Commons]

On the one hand, we had clusters of workstations in an office that needed to access a database over local-area networks. This was typically a relational database like DB2 or Oracle, and later Microsoft SQL Server, MySQL or PostgreSQL. In this world, the server was the special snowflake and initial wire protocols were essentially equivalent to the in-memory format of the data in the server itself—opaque, binary formats with minimal or no security, designed for the convenience of the database server on the other end. Any extensibility was limited to versioning the protocol so that if clients and servers both implemented the chosen version things would work. We see this strategy reflected in the on-disk file formats for things like Microsoft Word prior to the DOCX format as well.
There is an obvious upside to this technique: it’s fast and complete. There is an obvious downside as well: it’s extremely brittle and nearly impossible to make interoperable with other similar software, sometimes even with different versions of the same software from the same vendor. The APIs for all popular RDBMSs remain in this binary, proprietary category, and they cause great pain for new vendors, who must implement bug-for-bug compatibility with existing clients and workloads.
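To make the brittleness concrete, here is a toy sketch of a version-prefixed binary wire format (the message layout is invented for illustration, not taken from any real database). Both ends must agree on every byte, and the only escape hatch is rejecting versions you don’t recognize.

```python
import struct

WIRE_VERSION = 3  # both client and server must hard-code the same value

# A made-up "row" message: 1-byte version, 4-byte row id, 4-byte signed value.
ROW_FORMAT = ">BIi"

def encode_row(row_id: int, value: int) -> bytes:
    return struct.pack(ROW_FORMAT, WIRE_VERSION, row_id, value)

def decode_row(payload: bytes) -> tuple[int, int]:
    version, row_id, value = struct.unpack(ROW_FORMAT, payload)
    if version != WIRE_VERSION:
        # The only extensibility hook: refuse anything you don't recognize.
        raise ValueError(f"unsupported protocol version {version}")
    return row_id, value

print(decode_row(encode_row(7, -1)))  # (7, -1)
```

Add a single field to the layout on the server side and every older client decodes garbage or fails outright; that is the interoperability trap described above.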

Open standards and collaboration

At the same time, through the efforts of DARPA and the US academic system, early internet technology like ARPANET was deployed and implemented. Mid-level networking protocols like TCP/IP allowed computers that shared no common implementation or control to start talking to each other, and APIs like SMTP for email, FTP for file transfer, and eventually Gopher and HTTP for information access were invented and deployed. To me, the key difference between these WAN protocols and the LAN protocols is not efficiency (after all, the WAN has much lower throughput and higher latency than the LAN) but control. With internet protocols, no one person controls both the client and the server. There is no single codebase that serves as an ad-hoc specification for the interface. Even if you follow the spec yourself, there is no guarantee that anyone else you need to communicate with does.
This encouraged two things that were formative in the development of the web: the human-readable API, and the robustness principle, also known as Postel’s law: “be conservative in what you send; be liberal in what you accept.”
Whereas the proprietary protocols’ principal goal was correctness and completeness in exchanging data with a single other application, the open protocols’ goal was to maximize communication with many applications completely unknown and unpredictable to the protocol’s designers.
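To see what “human-readable” means in practice, here is a sketch that speaks HTTP/1.1 over a raw TCP socket; the bytes on the wire are plain text you could type by hand (example.com is used only as a convenient public server).

```python
import socket

HOST = "example.com"  # any public web server will do for this demo

request = (
    "GET / HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection((HOST, 80)) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

# The status line and headers come back as readable text, too.
print(response.split(b"\r\n\r\n", 1)[0].decode("ascii", errors="replace"))
```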
[Image: Map of early ARPANET connectivity in the 1970s. Semaforo GMS, CC BY-SA 4.0, via Wikimedia Commons]

Nevertheless, these protocols were still essentially use-case driven: interoperable, but not extensible. That limitation plagues us to this day with things like email spam, because the cost of breaking compatibility with the existing deployed landscape is too high.
Spam will be with us until the heat death of the universe, but can’t we do better and create some truly general-purpose protocols? It turns out we can.

General-purpose protocols

To me, the thing that makes a protocol general-purpose is the ability to define a schema or interface. Whether it’s procedural or declarative, static or dynamic, the ability to communicate semantics in a machine-readable way elevates an API from serving a single use case to serving an unlimited number of future use cases.
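As a rough sketch of the idea (a Python stand-in, not a real interface definition language, with a hypothetical Account interface invented for illustration), an interface definition declares operations and types in a machine-readable way while saying nothing about implementation or transport.

```python
from abc import ABC, abstractmethod

class Account(ABC):
    """A stand-in for what an interface definition language expresses:
    the operations and their types are declared in a machine-readable form,
    with no implementation and no assumptions about where the object lives.
    A real IDL compiler would generate client stubs and server skeletons
    from a declaration like this."""

    @abstractmethod
    def deposit(self, amount: float) -> None: ...

    @abstractmethod
    def balance(self) -> float: ...
```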

CORBA

The earliest widely-used general-purpose protocol in my opinion is CORBA. CORBA is a complete stack including an interface definition language (IDL), platform- and network-independent data types, and a network protocol. It even offers options for pass-by-reference and pass-by-value communication semantics.
[Image: Transparent object access over the network, theoretically. Alksentrs, CC BY-SA 3.0, via Wikimedia Commons]

If that last point made you pause, it probably should. The biggest criticism of CORBA was its assumption that the network is transparent to the application. Essentially, it proposed that there was no practical difference between object access and function calls on the program heap, remote access and function calls over a LAN, or even more remote access over the internet.
This idea was very popular at the time and gave rise to things like COM embedding in Microsoft Windows, which seemed great in theory but in practice was roughly equivalent to iframes on the web: not very useful and not very secure. It turns out that the performance, reliability, and security context of the access really matters, and assuming that every application might want to access every other application creates a real mess.

Too general, or not general enough?

CORBA is fundamentally a low-level protocol that reflects the organization and metaphors of a single-machine computer program. It was built on a model of in-process function calls and data types. As programs grew in complexity, these meta-semantics started making less and less sense. What if you want to send requests that don’t correspond to any individual thing within the client or the server? What if you want to negotiate the semantics of the request, and not just check whether it’s possible? And what if you need to inspect API requests without special tools and access to the IDL?
CORBA tried to solve the general-purpose application interchange problem; it did not attempt to solve the single-purpose information exchange problem. Parallel efforts on that information exchange problem gave rise to HTTP in the 90s. Later, XML attempted to merge the two problems and find a truly general solution. XML failed to accomplish its goals, but it was a step toward the modern era and the much more successful and promising GraphQL.

Next in the series

The next post picks up with XML and articulates why and how GraphQL came to be.
If you are interested in diving into Fauna’s GraphQL interface, sign up for a free account now, read our documentation, or watch our GraphQL workshop.

If you enjoyed our blog, and want to work on systems and challenges related to globally distributed systems, serverless databases, GraphQL, and Jamstack, Fauna is hiring!
