An overwrite mechanism for machine- and human-friendly API documentation

Jun 03 2021

There are always issues with API documentation. It gets outdated quickly, it’s tedious and only loosely standardized, and there aren’t enough engineers who can write, nor enough writers who are technical enough. Automated documentation is useless on its own, while human-crafted examples and descriptions take too long for the modern breakneck release cycle.

How do tech writers rectify these issues? Is the docs-as-code paradigm good enough to keep up?

I’ve been thinking about this for a long time. Automation and human effort need to complement each other. Automation is excellent for repetitive boilerplate, while coherence (context and examples) comes from a sentient storyteller, a role that for now still falls to humans. I wonder how soon the day will come when AI generates those stories for us.

Overwrite Mechanism #

The Overwrite Mechanism is a collaboration pattern to add a “human touch” to machine-generated API docs. It mirrors how people collaborate on a document.

Imagine you and a partner writing a proposal:

  1. Define scope. “You can work on the introduction and value propositions, and I’ll do the methodology and the statistics.”
  2. Honor the agreement. When we open Google Docs at the same time, we type in different paragraphs.
  3. If we both end up working on the same topic, we will keep the superior writer’s work.
  4. Merge at the end.

Separating and defining scope is key to preventing clashes. Funny that this concept is basically how Git works.

Docfx #

Microsoft has already implemented the “overwrite” mechanism in their documentation tool Docfx. Here’s how their implementation works:

  1. Machine-generated documentation that follows a common standard definition is exported from the source code.
  2. In a separate folder, the writer adds annotations in Markdown files.
  3. The publisher resolves the two inputs into a single output HTML.

The machine-generated documentation is conceptually spliced and assigned unique IDs, specifying where human annotations take priority. Then, the human has to create separate annotation files mapped to the IDs of where they want the annotation to be shown.
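For reference, a Docfx overwrite file is (if I remember the format correctly) a Markdown file whose YAML header names the target UID and binds the Markdown body to a field; the `uid` below is made up:

```md
---
uid: MyProduct.Users.UserClient.GetUser(System.Int32)
summary: *content
---
Returns a single user. The id is the one assigned at registration.
```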

To make it easier, Docfx adds an “Edit” button that takes you to GitHub to begin a draft. The Docfx generator automatically links the Markdown annotations and IDs for you. All that’s left is to click “Edit” and write ✍️.

What’s the issue?

  1. Out of the box, Docfx only works with Swagger v2.0, an outdated REST API standard.
  2. You have to conform to Microsoft’s predetermined naming scheme.
  3. Ironically, they have no documentation on how to create your own data contract. I’m not even talking about a file processor, because they have that documented, but just an example of their REST API schema and the entry points. People are begging for OpenAPI 3.0 specs to be implemented, but there’s no info on how to do it yourself.

What do these issues reveal?

  1. Standardizing data contracts is pretty difficult. By the time a powerful tool is released, the specification it was built on is already outdated. Imagine if academic citation standards changed every month. It would drive researchers insane.
    • Microsoft Word’s References feature still uses APA 6th edition, even though APA 7th was released in 2019.
  2. Accommodating the countless ways people can design their data is an exercise in futility. We could take lessons from Markdown or AsciiDoc and keep it simple enough to be adapted to any use case.

There are very few tools out there that actually specialize in API documentation. Docfx is one of the best available, which comes as no surprise, since Microsoft works on it.

Given that we need the ability to define the human-machine contract, how would I design an overwrite process that’s more human-friendly?

1. Auto-generate from source #

Let’s say a machine can generate a template and fill in most of the details, marking the places where it needs help with <> placeholders. An appropriate Markdown template for a REST API might look like:

# Title of Method

<DESCRIPTION annotation>

`GET | POST | DELETE | PUT /relative/url/path`

<REQUEST annotation>

| Param | Type | Description |
|-----------|---------|-----------------------|
| required* | string | <PARAM_1 description> |
| optional | boolean | <PARAM_N description> |

<REQUEST_EXAMPLE annotation>

<REQUEST_AFTERWORD annotation>

### Body

<BODY_DESCRIPTION annotation>

**Example:**

```json
{}
```


| Property | Type | Description |
|-----------|--------|-------------------------|
| property | number | <PROPERTY_1 annotation> |
| propertyN | number | <PROPERTY_N annotation> |

<BODY_AFTERWORD annotation>

### Response

<RESPONSE_DESCRIPTION annotation>

| Status | Content |
|--------|---------|
| 200 | {} |

<RESPONSE_AFTERWORD annotation>

Pretend that some magic tool will fill in the tables and types for you. The machine’s obligations are defined, so now we pull out what the human is responsible for.

# Dear human,
# Please fill out the rest of the REST API annotations~

DESCRIPTION:
REQUEST:
REQUEST_EXAMPLE:
REQUEST_AFTERWORD:
- PARAM_1:
- PARAM_N:
BODY_DESCRIPTION:
- PROPERTY_1:
- PROPERTY_N:
BODY_AFTERWORD:
RESPONSE_DESCRIPTION:
RESPONSE_AFTERWORD:

# From,
# Machine
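As a sketch (in Python, with a made-up placeholder syntax matching the template above), the machine could derive this stub by scanning its own output:

```python
import re

# Hypothetical sketch: scan a machine-generated Markdown template for
# <NAME annotation> / <NAME description> placeholders and emit the
# "Dear human" stub for the writer to fill in.
PLACEHOLDER = re.compile(r"<([A-Z0-9_]+) (?:annotation|description)>")

def extract_flags(template: str) -> list:
    """Return annotation flag names in order of appearance."""
    return PLACEHOLDER.findall(template)

def make_stub(template: str) -> str:
    """Emit one empty 'FLAG:' line per placeholder."""
    return "\n".join(flag + ":" for flag in extract_flags(template))

template = "# Title of Method\n\n<DESCRIPTION annotation>\n\n<REQUEST annotation>"
print(make_stub(template))  # prints "DESCRIPTION:" then "REQUEST:"
```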

2. Human annotation #

The human is responsible for defining annotations. It could be a YAML file, or JSON, or whatever makes the most sense.

DESCRIPTION: My cool description!
REQUEST: This request is cool!
REQUEST_EXAMPLE: An example goes here!
REQUEST_AFTERWORD: This will fetch stuff.
- PARAM_1: The id assigned at registration.
- ...
- PARAM_N: Specifying this parameter will filter by activity!
BODY_DESCRIPTION: The body should be formatted for your benefit!
- PROPERTY_1: This represents the key from some other data model!
- ...
- PROPERTY_N: Ad infinitum
BODY_AFTERWORD: Note, sometimes there are caveats and limitations with our implementation!
RESPONSE_DESCRIPTION: The response will give you everything you need!
RESPONSE_AFTERWORD: There is a bug, but here's a workaround!

This file is very simple. As soon as you want basic formatting, like Markdown, YAML’s pipe | lets you enter multi-line text literals.

DESCRIPTION: Returns json data about a single user.
REQUEST: This request is cool!
REQUEST_EXAMPLE: |
$.ajax({
url: "/users/1",
dataType: "json",
type : "GET",
success : function(r) {
console.log(r);
}
});

REQUEST_AFTERWORD: This will fetch stuff.
- PARAM_1: The id assigned at registration.
- ...
- PARAM_N: Specifying this parameter will filter by activity!
BODY_DESCRIPTION: The body should be formatted for your benefit!
- PROPERTY_1: This represents the key from some other data model!
- ...
- PROPERTY_N: Ad infinitum
BODY_AFTERWORD: |
**Note:** Sometimes there are caveats and limitations with our implementation.
1. We made the design decision on purpose.
2. If you have other use cases, try something like PlanB.

RESPONSE_DESCRIPTION: The response will give you everything you need!
RESPONSE_AFTERWORD: |
There is a bug, but here's some workarounds:
- OptionA
- OptionB

For the purposes of a theoretical Overwrite Mechanism, this may be enough. The issue is that YAML’s indentation-based format is finicky. But hey, we’re not afraid of working with plain text, are we? At some point, we can use a YAML linter to be our robot editor.
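On that note, the robot editor could go beyond syntax. A small check (sketched here in Python, with flag names taken from the example above) could verify that every required flag has a non-empty annotation:

```python
# Hypothetical lint step: in practice REQUIRED would come from the
# machine-generated stub; it is hard-coded here for illustration.
REQUIRED = ["DESCRIPTION", "REQUEST", "REQUEST_EXAMPLE",
            "RESPONSE_DESCRIPTION"]

def lint(annotations: dict) -> list:
    """Return the flags that are still missing or empty."""
    return [flag for flag in REQUIRED
            if not str(annotations.get(flag, "")).strip()]

# A writer who only filled in DESCRIPTION gets nudged about the rest.
todo = lint({"DESCRIPTION": "Returns json data about a single user."})
```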


3. Combined output #

Now that we have the source and the annotation contract defined, some magic CICD pipeline will combine the two files. A parser replaces every ANNOTATION_FLAG with its defined value.
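A minimal version of that parser, assuming the <NAME annotation> placeholder syntax from the template above, might look like:

```python
import re

# Sketch of the merge step: placeholders with no annotation are replaced
# with nothing, so unannotated sections simply disappear from the output.
PLACEHOLDER = re.compile(r"<([A-Z0-9_]+) (?:annotation|description)>")

def merge(template: str, annotations: dict) -> str:
    """Replace every placeholder with the writer's annotation text."""
    return PLACEHOLDER.sub(
        lambda m: annotations.get(m.group(1), ""), template)

doc = merge("# Show User\n\n<DESCRIPTION annotation>",
            {"DESCRIPTION": "Returns json data about a single user."})
```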

# Show User

Returns json data about a single user.

`GET /users/:id?activity={activity}`

This request is cool!

| Param | Type | Description |
|----------|---------|----------------------------------------------------|
| id* | integer | The id assigned at registration. |
| activity | string | Specifying this parameter will filter by activity! |

**Example:**

```js
$.ajax({
url: "/users/1",
dataType: "json",
type : "GET",
success : function(r) {
console.log(r);
}
});

```


This will fetch stuff.

### Body

The body should be formatted for your benefit!

| Property | Type | Description |
|----------|--------|-----------------------------------------------------|
| id | number | This represents the key from some other data model! |
| name | number | Ad infinitum |

**Note:** Sometimes there are caveats and limitations with our implementation.
1. We made the design decision on purpose.
2. If you have other use cases, try something like PlanB.

### Response

The response will give you everything you need!

| Status | Content |
|--------|------------------------------------------------------------|
| 200 | `{ id : 12, name : "Michael Bloom" }` |
| 404 | `{ error : "User doesn't exist" }` |
| 401 | `{ error : "You are unauthorized to make this request." }` |

There is a bug, but here's some workarounds:
- OptionA
- OptionB

Existing Tools #

I’ve defined my problem and an idea of the kind of solution I’m looking for. Can we find an existing tool that accomplishes the basics?

  1. Define a contract for an API (any API I want, not just web APIs)
  2. Take annotations from a human-readable format
  3. Merge the human and machine inputs
  4. Preview the final output before publishing

Other than Docfx, there are lots of tools meant to keep up with the breakneck pace of software development.

Stoplight Studio fulfills #1, #2, and #4. It’s an API designer and modeler. To be more exact, it’s a JSON schema modeler. It’s not quite a documentation tool, but it provides an interface to add annotations as per the OpenAPI spec. However, it doesn’t address a workflow where OpenAPI files are machine-generated from source code. Hence, edits done in Studio will have to be merged back to a “source” somewhere, somehow.

Still, it looks useful for defining the data contract, and for tech writers further down the pipeline who would prefer to work in a nicer UI than plaintext.

Postman fulfills #2-4. It’s a collaboration environment that allows software developers to create web APIs. They offer a documentation editor where you can comment and share your API specs, while Postman handles the website hosting and access control for you.

ReadMe also fulfills #2-4. It’s a documentation service built on Swagger/OpenAPI specifications, and they incorporate a first-class Developer Experience. However, the focus is on REST APIs only. If you use any other protocol, you’ve got to find another solution.

Smithy is a language for defining service contracts. It’s made by Amazon, and it makes sense that one of the largest web providers in the world would need a way to categorize and define its cloud programs, which span multiple computers and networks. Otherwise, all we would do is talk about a “service,” which is just as generic as a “program.”

No vendor lock-in #

The thing I dislike about Postman is that they use their own API specification and want to keep people within their system. Thus, we as consumers lose interoperability: APIs designed in Postman, and any documentation written using their service, don’t export nicely into a CICD pipeline. In contrast, OpenAPI has a summary field for everything.

Failings of the current tools #

Postman, ReadMe, and to an extent Stoplight, are tools that assume the developer writes. And if developers loved writing, we wouldn’t need technical writers, would we?

So more tools pop up that allow source code comments to be extracted and turned into documentation. Javadoc is a famous example for the Java language, and Swagger has been widely adopted for REST APIs.

The issue is that source code should only contain comments, which run 1-2 sentences at most. It isn’t the best place for explanations, which include examples, ties from the code to broader contexts, and other info that would only clutter the file.

If your codebase has access restrictions, other developers won’t be able to look in your source code anyway. Or they’ll ignore the documentation and contact you to explain the whole shebang.

In most organizations, tech writers aren’t considered members of the engineering teams, and cannot edit the source code. While it’s easy to grant a tech writer access to a Git project, it won’t scale when the writer is suddenly working for 10 or 20 teams. I don’t think writers want to read code when they’re hired to write content, nor do engineers want to write content when they’re hired to code.

Let’s blame the tools #

The problem with software engineers failing to write comments and document their process is simple: they’re not writers. To get better documentation, the solution has always been to look at social factors rather than tooling:

  • Train engineers to write
  • Train writers on the technology

Optimally, we’d invest in both. Many people will agree that good documentation stems from a culture that encourages writing.

But if you believe in the theory of technological determinism (technology shapes societal and cultural values), then the tools we use influence how we think. A blatant example: America is returning to an oral culture after years of written-communication dominance, because telecommunication has become better and cheaper, with video streaming, video chat, smartphones with camcorders, and apps like Instagram, Vine, and TikTok enabling the masses to film and share.

If technological determinism is a theory that has merit, we can assume that if our documentation tools are crappy and inconvenient, then we’re going to experience a crappy and inconvenient documentation process.

Is there a tool that lets writers and engineers collaborate better, instead of succumbing to the “email, meeting, unresponsive last-minute procrastination” loop? Do we have a tool that’s specifically made for tech writer + software developer collaboration? I haven’t found one that’s satisfactory. I’d love to see the docs-as-code approach come in a nice, convenient package I can install, but I think most shops have to spin up a custom solution.

If we want to scope by job title, we can say that developers are responsible for “machine” code, and writers are responsible for “human” code. But then you get the Frankenstein that is software documentation, which is “human code for writing machine code.”

Blah. At this point, I can’t tell if computer science is a science or an art.

Tests-as-Docs #

Examples make or break documentation, and writing out examples is annoying. Can’t we automate this part too?

I want to explore pulling tests in as part of the documentation. Developers are typically required to write tests. If the source code serves as the Single Source of Truth, then there are no better examples than the ones the developers themselves already wrote as tests.

In the Overwrite Mechanism process, I denoted annotations where examples can be placed, such as REQUEST_EXAMPLE. If we modify the process, we can insert test samples there.

This means that our human-machine contract requires a third “actor.” This “actor” would be part of the CICD pipeline, scanning for tests in a specific location and inserting them into the artifact before passing it to the tech writer.
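As a sketch of that third actor (the marker comments are invented here for illustration), a Python step could harvest tagged blocks from test files and map them to example annotations:

```python
import re

# Hypothetical harvester: tests wrap documentation-worthy snippets in
# marker comments, and the pipeline maps each one to an example flag.
MARKER = re.compile(
    r"# doc-example: ([A-Z0-9_]+)\n(.*?)\n# end-doc-example",
    re.DOTALL)

def harvest_examples(test_source: str) -> dict:
    """Return {FLAG: snippet} for every tagged block in a test file."""
    return {flag: snippet.strip()
            for flag, snippet in MARKER.findall(test_source)}

tests = """
# doc-example: REQUEST_EXAMPLE
resp = client.get("/users/1")
assert resp.status_code == 200
# end-doc-example
"""
examples = harvest_examples(tests)
```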

Conclusion #

To sum up this post, I propose an Overwrite Mechanism that bridges the gap between writers and developers. Store annotations in a separate file from the machine-generated source, but in the same repository/codebase/folder, so that developers and writers can work in parallel and within the same context. Finally, there needs to be a preview feature to review the output.

  1. Define a “who-writes-what” template for an API
  2. Machine generate from the source code
  3. Annotate the missing parts
  4. Merge the source and the annotations
  5. Publish: apply styling, convert to deliverable format

Somewhere along step 2-4, we can define how to extract test cases to become documentation examples.

If this toolchain existed, it would be the best API documentation tool!


UPDATE Aug 19, 2021: Part 2: Sanitizing OAS files