
Anypoint Platform Architecture

A document that describes the Best Practices for API design for your
enterprise

Prepared by: Vikas Tarade

Integration / Solution Architect (Client): Vikas Tarade

Senior Business Analyst:

C4E Leader:

Version: 0.1

Date: 08/12/2019



Document Control

Document history

Version | Version Date | Updated by | Description of Changes



TABLE OF CONTENTS

1 Introduction..............................................................................................................................4
1.1 Executive Summary.........................................................................................................4
1.2 Purpose of this document.................................................................................................5
1.3 Intended Audience...........................................................................................................5
2 Planning your API...................................................................................................................6
2.1 Context.............................................................................................................................6
3 Designing the Specification...................................................................................................12
3.1 Versioning......................................................................................................................12
3.2 Spec-Driven Development.............................................................................................15
3.3 Choosing a Spec............................................................................................................19
4 Using RAML.........................................................................................................................22
4.1 Getting Started...............................................................................................................22
4.2 URI Parameters..............................................................................................................24
4.3 Query Parameters...........................................................................................................25
4.4 Responses......................................................................................................................26
4.5 Resource Types..............................................................................................................27
4.6 Traits..............................................................................................................................29
5 Prototyping and Agile Design...............................................................................................30
5.1 Mocking your API.........................................................................................................30
5.2 Getting Feedback...........................................................................................................32
6 Authorizing and Authentication............................................................................................35
6.1 Open Authentication......................................................................................................35
6.2 Generating Tokens.........................................................................................................37
6.3 OAuth2..........................................................................................................................37
7 Designing your resources......................................................................................................43
8 Designing your Methods........................................................................................................54
9 Handling Responses...............................................................................................................62
10 Managing your API with Proxy.........................................................................................67
11 Documenting and Sharing your API..................................................................................79



1 Introduction

1.1 Executive Summary

The demand for flexibility and extensibility has driven the development
of APIs and tools alike, and in many regards it has never been easier to create
an API than it is today, with a multitude of frameworks (such as JAX-RS,
Apigility, Django REST Framework, Grape), specs (RAML,
Swagger, API Blueprint, IO Docs), and tools (API Designer, API Science, APImatic)
available.
However, despite the predictability of the demand for APIs, this tidal wave has
taken many by surprise. And while many of these tools are designed to encourage
best practices, API design seems to be constantly overlooked for development
efficiency. The problem is, however, that while this lack of focus on best practices
provides for a rapid development framework, it is nothing more than building a
house without a solid foundation. No matter how quickly you build the house, or
how nice it looks, without a solid foundation it is just a matter of time before the
house crumbles to the ground, costing you more time, energy, and resources than
it would have to simply build it right the first time.
By following best practices and carefully implementing these standards, you may
add weeks to development time, but you can shave off months or even years of
development headaches and potentially save thousands to hundreds of
thousands of dollars.

What is an API?
In the simplest of terms, API is the acronym for Application Programming
Interface, which is a software intermediary that allows two applications to talk to
each other. In fact, each time you check the weather on your phone, use the
Facebook app or send an instant message, you are using an API.
Every time you use one of these applications, the application on your phone is
connecting to the Internet and sending data to a server. The server then retrieves
that data, interprets it, performs the necessary actions and sends it back to your
phone. The application then interprets that data and presents you with the
information you wanted in a human-readable format.
What an API really does, however, is provide a layer of security. Because you are
making succinct and explicit calls, your phone’s data is never fully exposed to the
server, and likewise the server is never fully exposed to your phone. Instead, each
communicates with small packets of data, sharing only that which is necessary—
kind of like ordering food from a drive-through window. You tell the
server what you would like to eat, they tell you what they need in return and
then, in the end, you get your meal.

1.2 Purpose of this document

This document presents best practices for Designing and Building APIs using MuleSoft’s
Anypoint Platform.

1.3 Intended Audience

The intended audience for this document includes the Eversana team, comprising the API
Product Owners, Delivery Leads, Enterprise Architects, and Solutions Architects.



2 Planning your API

2.1 Context

Perhaps the foundation of the foundation, understanding why you are building
an API is a crucial step towards understanding what data/ methods
your API should make accessible and how your users will utilize it. Who are
your API users – are they your customers, or third party services, or developers
who are looking to extend upon your application for their
customers? Understanding the market you are serving is vital to the success of
any product or service.

Key questions to ask:

• Who is our target user for this API?
• To which actions do they need access?
• What are their use cases for integrating with our API? List out the actions.
• What technologies will they be using to integrate with our API?
• How are you going to maintain your API?
• How are you going to version your API?
• How are you going to document your API?
• How will developers interact with your API?
• How are you going to manage support?

Who is our target user for this API?


As developers, we tend to like to jump to what our API will do before we think
about what our API should do, or even what we want it to do. So, before we get
started, we need to take a step back and ask ourselves, “Who will be using our
API?” Are you building your API for your application’s customers? For their
business partners? For third party developers so that your platform can be
extended upon? Oftentimes the answer tends to be a combination of the above,
but until you understand for whom you are building your API, you aren’t ready to
start planning it.
To Which Actions Do They Need Access?
All too often I hear companies say, “We want to build an API to expose our data,”
or “We want to build an API to get people using our system.” Those are great
goals, but just because we want to do something doesn’t mean we can truly
accomplish it without a solid plan. After all, we all want people to use our APIs,
but why should they? While it may seem a little harsh, this is the very next
question we need to carefully ask ourselves: Why would our users (as we
identified them above) want to use our API? What benefit does it offer them?
Another common mistake is answering the question of “why?” with, “our
reputation.” Many companies rely on their name rather than their capabilities. If
you want to grow a strong and vibrant developer community, your API has to do
something more than just bear your name. What you should be doing is asking
your potential users, “Which actions would you like to be able to accomplish
through an API?” By speaking directly to your potential API users, you can skip all
the guesswork and instead find out exactly what they are looking for and which
actions they want to be able to take within your API, and also isolate your
company’s value to them. Often, the actions your users will want are different
from the ones you anticipated, and by having this information you can build out
your API while testing it for real use cases. Remember, when building an
application for a customer, we sit down either with them or the business owners
to understand what it is they want us to build. Why aren’t we doing the same
thing with APIs? Why aren’t we sitting down with our customers, the API users,
and involving them in the process from day one? After all, doing so can save us a
lot of time, money and headaches down the road.

List out the actions


Now that you know what your developers want to do, list out the actions. All too
commonly, developers jump into the CRUD mindset, but for this exercise, you
should simply create categories and add actions to them based on what part of
the application or object it will affect.
For example, your developers will probably want access to their users, the ability
to edit a user, reset a password, change permissions, and even add or delete
users. Assuming your application has a messaging system, they may also want to
create a new draft, message a user, check messages, delete messages, etc.
As you isolate these different categories and actions, you’ll want to chart them
like so:



Now, you may notice that in the above chart we have duplicate entries. For
example, we have “message a user” under both “Users” and “Messages.” This is
actually a good thing, because it shows us how the different actions work
together across different categories, and potentially, different resources.
We now know that there is a viable use case for messages not only within users,
but also within messages, and we can decide under which it makes the most
sense. In the case of “send a message,” it would probably make the most sense under
the “messages” resource; however, because of the relationship, we might want to
include a hypertext link when returning a user object in the “users” resource.
By doing this exercise, not only can we quickly isolate which actions we need to
plan for, but also how they’ll work together and even how we should begin
designing the flow of our API.
This step may seem incredibly simple, but it is one of the most crucial steps in the
planning process to make sure you are accounting for your developers’ needs (I
would even recommend showing your potential API users your chart to see if they
think of anything else after the fact), while also understanding how the different
resources within your API will need to work together, preventing you from having
to rewrite code or try to move things around as you are coding your API.

What technologies will they be using to integrate with our API?

Unless you are starting from ground zero and taking an API-first approach, there’s
a good chance you have other applications and services that your API may need
to interact with. You should take time to focus on how your API and application

API Design – Best Practices Page: 8 of 82


will interact. Remember, your application can change over time, but you’ll want
your API’s interface to remain consistent for as long as possible. You’ll also want
to take time to isolate which services the API will need to interact with. Even
before building the API you can ensure that these services are flexible enough to
talk to your API while keeping it decoupled from your technology stack.
Along with understanding any technical risks involved with these services, if they
are organizationally owned, developers can start focusing on transitioning them
to an API-focused state, ensuring that by the time you are ready to connect them
to your API, they are architecturally friendly. It is never too early to plan ahead.

How Are You Going to Maintain Your API?

Remember that building an API is a long-term commitment, because you won’t
just be creating it; you will be maintaining it as well. It’s very possible to build a
good API that doesn’t need much work after its release, but more often than
not, especially if you’re an API-driven company, you’ll find that not only are there
bugs to fix, but developers’ demands will increase more and more as they use
your API for their applications. One of the advantages to the Spec-Driven
Development approach to APIs is that you start off by building the foundation of
your API, and then slowly and carefully add to it after that. This is the
recommended approach, but regardless you shouldn’t plan on just launching it
and leaving it, but rather having dedicated resources that can continue to
maintain, patch bugs, and hopefully continue to build upon your API as new
features are released within your application.

How Are You Going to Version Your API?

Along with maintaining your API, you should also plan on how you are going to
version your API. Will you include versioning in the URL such as
http://api.mysite.com/v1/resource, or will you return it in the content-type
(application/json+v1), or are you planning on creating your own custom
versioning header or taking a different approach altogether?
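For illustration only (the host name and media type below are placeholders, not recommendations), here is roughly how the two most common approaches can be captured in a RAML spec: the version either appears in the baseUri, or the URI stays unversioned and the version is carried in the declared content type.

#%RAML 0.8
title: My API
version: v1
# Option 1 – version in the URI: clients call http://api.mysite.com/v1/resource
baseUri: http://api.mysite.com/{version}

# Option 2 – version in the media type instead (leave {version} out of the baseUri
# and return a versioned content type such as application/json+v1)
/resource:
  get:
    responses:
      200:
        body:
          application/json+v1: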
Keep in mind that your API should be built for the long-term, and as such you
should plan on avoiding versioning as much as possible; however, more likely than
not there will come a time when you need to break backwards compatibility,
and versioning will be the necessary evil that lets you do so. We’ll talk about
versioning more, but in essence you should try to avoid it, while still planning for
it—just as you would plan an emergency first aid kit. You really don’t want to
have to use it, but if you do – you’ll be glad you were prepared.

How Are You Going to Document Your API?

Along with maintenance, developers will need access to documentation,
regardless of whether you are building a hypermedia-driven API or not.
And while documentation may seem like a quick and easy task, most companies
will tell you it is one of their biggest challenges and burdens when it comes to
maintaining their API.
As you update your API you will want to update your documentation to reflect
this, and your documentation should have a clear description of the resource, the
different methods, code samples, and a way for developers to try it out.

How will Developers Interact with Your API?

Another important aspect to plan for is how developers will interact with your
API. Will your API be open like the Facebook Graph API, or will you utilize an API
Key? If you’re using an API key, do you plan on provisioning your API to only allow
certain endpoints, or set limits for different types of users? Will developers need
an access token (such as OAuth) in order to access users’ data?
It’s also important to think about security considerations and throttling. How are
you going to protect your developers’ data and your service architecture? Are
you going to try and do this all yourself, or does it make more sense to take
advantage of a third-party API Manager such as MuleSoft?
The answers to these questions will most likely depend on the requirements that
you have for your API, the layers of security that you want, and the technical
resources and expertise you have at your company. Generally, while API
Management solutions can be pricey, they tend to be cheaper than doing it
yourself.

How Are You Going to Manage Support?

Another consideration, along with documentation, is how you are going to
manage support when things go wrong. Are you going to task your engineers
with API support questions? If so, be warned that while this may work in the
short-term, it is typically not scalable.
Are you going to have a dedicated API support staff? If so, which system(s) will
you use to manage tickets so that support requests do not get lost, can be
followed up on and escalated, and your support staff do not lose their minds?
Are you going to have support as an add-on, a paid service, a partner perk, or will
it be freely available to everyone? If support is only for certain levels or a paid
subscription, will you have an open community (such as a forum) for developers
to ask questions about your API, or will you use a third-party site like Stack Overflow?
And how will you make sure their questions get answered, and bugs/
documentation issues get escalated to the appropriate source?



3 Designing the Specification

Once you understand why you are building your API, and what it needs to be able
to accomplish, you can start creating the blueprint or spec for your API. Again,
going back to the building a house scenario, by having a plan for how your API
should look structurally before even writing a line of code you can isolate design
flaws and problems without having to course correct in the code.
Using a process called Spec-Driven Development, you will be able to build your
API for the long-term, while also catching glitches, inconsistencies and generally
bad design early on. While this process usually adds 2–4 weeks onto the
development cycle, it can save you months and even years of hassle as you
struggle with poor design, inconsistencies, or worse—find yourself having to build
a brand-new API from scratch.
The idea behind a REST API is simple: it should be flexible enough to endure. That
means as you build your API, you want to plan ahead—not just for this
development cycle, not just for the project roadmap, but for what may exist a
year or two down the road.
This is really where REST excels, because with REST you can take and return
multiple content types (meaning that if something comes along and replaces
JSON, you can adapt) and even be fluid in how it directs the client with
hypermedia. Right off the bat you are being set up for success by choosing the
flexibility of REST. However, it’s still important that you go in with the right
mindset—that the API you build will be long-term focused.

3.1 Versioning
Versioning is important to plan for, but all too often companies look at an API the
same way they do desktop software. They create a plan to build an API—calling it
Version 1—and then work to get something that’s just good enough out the door.
But there’s a huge difference between creating a solid foundation that you can
add onto and a half-baked rush job just to have something out there with your
name on it. After all, people will remember your name, for better or worse.
The second problem is they look at versions as an accomplishment. I remember
one company that jumped from Version 2 to Version 10 just because they
thought it sounded better and made the product look more advanced. But with
APIs, it’s just the opposite. A good API isn’t on Version 10, it’s still rocking along at
Version 1, because it was that well thought out and designed in the first place. If
you go into your API design with the idea that you are “only” creating Version 1,
you will find that you have done just that—created a version that will be short
lived and really nothing more than a costly experiment from which you hopefully
learned enough to build your “real” API. However, if you follow the above steps
and carefully craft your API with the idea that you are building a foundation that
can be added onto later, and one that will be flexible enough to last, you have a
very good chance of creating an API that lives 2–3 years—or longer!

Think about the time and cost it takes to build an API. Now think about the time it
takes to get developers to adopt an API. By creating a solid API now, you avoid all
of those costs upfront. And in case you’re thinking, “It’s no big deal, we can
version and just get developers to upgrade,” you might want to think again. Any
developer evangelist will tell you one of the hardest things to do is to get
developers to update their code or switch APIs. After all, if it works, why should
they change it? And remember, this is their livelihood we’re talking about—they
can spend time making money and adding new features to their application, or
they can spend time losing money trying to fix the things you broke—which would
you prefer to base your reputation upon?
Versioning an API is not only costly to you and the developer, it also requires
more time on both ends, as you will find yourself managing two different APIs,
supporting two different APIs, and confusing developers in the process. In
essence, when you do version, you are creating the perfect storm.
You should NOT version your API just because you’ve:
• Added new resources
• Added data in the response
• Changed technologies (Java to Ruby)
• Changed your application’s services

Remember, your API should be decoupled from both your technology stack and
your service layer so that as you make changes to your application’s technology,
the way the API interacts with your users is not impacted.
Remember the uniform interface—you are creating separation between your
API and your application so that you are free to develop your application as
needed, the client is able to develop their application as needed, and both are
able to function independently and communicate through the API.
However, you SHOULD consider versioning your API when:



• You have a backwards-incompatible platform change, such as completely
rewriting your application and completely changing the user interface
• Your API is no longer extendable—which is exactly what we are trying to
avoid here
• Your spec is out of date (e.g. SOAP)

Understand you are poor at Design

The next thing that’s important to understand is that we, as developers, are poor
at long-term design.
Think about a project you built three years ago, even two years ago, even last
year. How often do you start working on a project only to find yourself getting
stuck at certain points, boxed in by the very code you wrote? How often do you
look back at your old code and ask yourself, “What was I thinking?”
The simple fact is that we can only see what we can see. While we may think we
are thinking through all the possibilities, there’s a good chance we’re missing
something. I can’t tell you how many times I’ve had the chance to do peer-
programming where I would start writing a function or method, and the other
developer would ask why I didn’t just do it in two lines of code instead!? Of
course their way was the right way, and super simple, but my mind was set, and I
had developer tunnel vision—something we all get that is dangerous when it
comes to long-term design.
By accepting that we, by ourselves, are not good at long-term design, we actually
enable ourselves to build better designs. By understanding that we are fallible and
having other developers look for our mistakes (in a productive way), we can
create a better project and a longer-lasting API. After all, two heads are better
than one!
In the past this has been difficult, especially with APIs. Companies struggle to
afford (or even recognize the benefit of) peer programming, and building out
functional mock-ups of an API has proven extremely costly. Thankfully, advances
in technology have made it possible to get feedback—not just from our
coworkers, but also from our potential API users—without having to write a single
line of code! This means that where before we would have to ship to find
inconsistencies and design flaws, now we can get feedback and fix them before
we even start coding our APIs, saving time and money not only in development,
but also support.



To take advantage of this new technology to the fullest, we can use a
methodology that I am calling Spec-Driven Development, or the development of
our API based on a predefined specification that has been carefully tested and
evaluated by our potential API users.

3.2 Spec-Driven Development


Spec-Driven Development is designed to take advantage of newer technologies in
order to make the development, management and documentation of our API
even more efficient. It does this by first dividing design and development into two
separate processes.
The idea behind Spec-Driven Development is that agility is a good thing, and so is
agile user testing/ experience. However, what we do not want to see is agile
development of the API design when it comes time to write the code. Instead,
Spec-Driven Development encourages separating the design stage from the
development stage, and approaching it iteratively. This means that as you build
out your design using a standardized spec such as RESTful API Modeling Language
(RAML), you can test that spec by mocking it up and getting user feedback.
MuleSoft’s API Contract Design Cycle demonstrates a well-thought-out flow for
perfecting your spec. It begins with designing your API, then moves to
mocking/simulating the API, soliciting feedback, and finally—depending on that
feedback—either determining that the spec is ready for development or returning
to the design stage where the cycle continues.



Once you have finished getting user feedback and perfecting the design in the
spec, then you can use that specification as your blueprint for design.
In essence, you are keeping agile user testing, and agile development, but splitting
them so you’re not doing agile user testing as you do the actual development (as
your code should be written to match the spec’s design, and thus the previously
tested and affirmed user experience).
It’s important to note that with Spec-Driven Development, there is no back and
forth. Once you move into the development phase, you are moving forward with
the assumption that the spec has been perfected, and that you have eliminated
99 percent of the design flaws/ inconsistencies. Should you find an issue with the
design, rather than correcting it in the development cycle, you need to stop and
go back to the design cycle, where you fix the spec and then retest it.

The reasoning behind this is pretty simple—we’re usually awful at long-term
design. As such, rather than make changes on the fly or try to fix things with a
short-sighted view, Spec-Driven Development encourages “all hands on deck” to
ensure that the changes you make (no matter how simple or insignificant they
might seem) do not cause any design inconsistencies or compounding problems,
either in the short-term or down the road.
In order to be successful with Spec-Driven Development, you should follow these
six constraints:



Standardized:
Spec-Driven Development encourages the use of a standardized format applicable
to the type of application you are building. In the case of building an API, for
example, the following specs would be considered standard or common among
the industry: RAML, Swagger, API Blueprint, IO Docs. Utilizing a standard spec
ensures easy portability among developers while also ensuring that the spec your
application relies on has been thoroughly tested by the community to ensure that
it will meet both your short-term and long-term needs while maintaining
consistency in its own format.
Consistent:
In developing your spec, you should utilize pattern-driven design as well as code
reuse when possible to ensure that each aspect of your spec is consistent. In the
event of building an API, this would mean ensuring your resources are all
formatted similarly and your methods all operate in a similar format—both in
regards to the request and available responses. The purpose of consistency is to
avoid confusion in both the development and use of your application so all
aspects of the application work similarly, providing the end user with the freedom
to move seamlessly from one focus to another.
Tested:
Spec-Driven Development requires a strong, tested spec in order to build a
reliable application. This means that the spec has to be carefully crafted and then
tested with both internal and external users to ensure that it accomplishes its
goals and meets the needs of all parties. The spec should be crafted,
mocked/prototyped and tested to retrieve user feedback. Once user feedback is
received, the spec should be modified appropriately, mocked and tested again,
creating a continuous cycle until you have perfected the spec—or at the least
eliminated a large majority of the design issues to ensure spec and application
longevity.
Concrete:
The specification should be the very foundation of your application or, in essence,
the concrete foundation of the house you are building. The spec should
encompass all aspects of your application, providing a solid blueprint to which
your developers can code. The spec does not have to encompass future additions,
but it should have taken as many of them into consideration as possible.
However, nothing that relates to the spec should be coded unless it first exists
inside of the spec.
Immutable:



The spec is the blueprint for development and is unchangeable by code. This
means that at no time is the code to deviate from or override the spec. The spec
is the ultimate authority of the application design, since it is the aspect that has
been most thought out and carefully designed, and has also been tested by real-
world users. It is important to realize that short-term coding implementations can
be detrimental to an application’s longevity, and as such have no place in Spec-
Driven Development.

Persistent:
All things evolve, and the application and spec are no different. However, each
evolution must be just as carefully thought out as the original foundation. The
spec can change, but each change must be justified, carefully evaluated, tested
and perfected. In the event of redevelopment, if the spec is found not to be
renderable, it is important to go back and correct the spec by re-engaging in user
testing and validation, and then updating the code to match to ensure that it is
consistent with your spec, while also ensuring that the necessary changes do not
reduce the longevity of your application.

The last constraint of Spec-Driven Development, the Persistent constraint,
explains how you can use Spec-Driven Development to continue building out and
adding additions to your API. For every change you make to the API, you should
start with the design stage, testing your new additions for developers to try out,
and then once validated, start adding the code and pushing the changes to
production. As described above in the Immutable constraint, your API should
never differ from the spec or have resources/methods that are not defined in the
spec.
Along with helping ensure that your API is carefully thought out, usable and
extendable for the long-term, use of these common specs offers several
additional benefits, including the ability to auto-generate documentation and create
interactive labs for developers to explore your API. There are even Software
Development Kit (SDK) generation services such as APIMatic.io and REST United
that let you build multi-language SDKs or code libraries on the fly from your spec!
This means that not only have you dramatically reduced the number of hours it
will require you to fix bugs and design flaws, handle support or even rebuild your
API, but you are also able to cut down on the number of hours you would be
required to write documentation/create tools for your API, while also making
your API even easier for developers to use and explore.



Of course, this leads us to choosing the right spec for the job.

3.3 Choosing a Spec


Choosing the best specification for your company’s needs will make building,
maintaining, documenting and sharing your API easier. Because Spec-Driven
Development encourages the use of a well tested, standardized spec, it is highly
recommended that you choose from RAML, Swagger or API Blueprint. However,
each of these specs brings along with it unique strengths and weaknesses, so it is
important to understand what your needs are, as well as which specification best
meets those needs.
RAML: The baby of the three most popular specs, RAML 0.8, was released in
October 2013. This spec was quickly backed by a strong working group consisting
of members from MuleSoft, PayPal, Intuit, Airware, Akana (formerly SOA
Software), Cisco and more. What makes RAML truly unique is that it was
developed to model an API, not just document it. It also comes with powerful
tools including a RAML/ API Designer, an API Console and the API Notebook—a
unique tool that lets developers interact with your API. RAML is also written in the
YAML format, making it easy to read, and easy to edit—regardless of one’s
technical background.
Swagger: The oldest and most mature of the specs, Swagger, just recently
released Version 2.0, a version that changes from JSON to the YAML format for
editing and provides more intuitive tooling such as an API Designer and an API
Console. Swagger brings with it the largest community and has recently been
acquired by SmartBear, while also being backed by Apigee and 3Scale. However,
with the transition from JSON to YAML, you may find yourself having to maintain
two different specs to keep documentation and scripts up to date. Swagger also
lacks strong support for design patterns and code reusability—important aspects
of Spec-Driven Development.
API Blueprint: Created by Apiary in March of 2013, API Blueprint is designed to
help you build and document your API. However, API Blueprint lacks the tooling
and language support of RAML and Swagger, and utilizes a specialized markdown
format for documenting, making it more difficult to use in the long run. Just the
same, API Blueprint has a strong community and does excel with its
documentation generator.

Overall, RAML offers the most support for Spec-Driven
Development and provides interactive tools that are truly unique. You can also
use MuleSoft’s free mocking service to prototype your API instead of having to install
applications and generate the prototype yourself - as you currently have to do
with Swagger. On the other hand, Swagger offers the most tooling and largest
community.



Be sure to take a look at the chart on the previous page to see which specification
best matches your business needs. However, I would personally recommend that
unless you have needs that cannot be met by RAML, that you strongly consider
using this spec to define your API. Not only will you have the ability to reuse code
within your spec, but RAML offers the strongest support for Spec-Driven
Development, has strong support for documentation and interaction, has unique,
truly innovative tools and offers strong support for SDK generation and testing.



4 Using RAML

One of the easiest ways to start working with RAML is with the API Designer, a
free open source tool available on the RAML website at http://raml.org/projects.
To get started even faster, MuleSoft also offers a free, hosted version of its API
Designer. You can take advantage of this free service by visiting
https://anypoint.mulesoft.com/apiplatform/.

4.1 Getting Started


Because RAML is defined in the YAML (“YAML Ain’t Markup Language”) format, it is
both human- and machine-readable, and with the API Designer you will receive
auto suggestions, tooltips, and available options, while also having a visual
preview to the right of your screen.

RAML requires that every API have a title, a version and a baseUri. These three
aspects help ensure that your API can be read, versioned and accessed by any
tools you choose to implement. Describing these in RAML is as easy as:

#%RAML 0.8
title: My Book
version: 1
baseUri: http://server/api/{version}

The nice thing is that the API Designer starts off by providing you the first three
lines, so all you need to add is your baseUri. You’ll also notice that RAML has a
version placeholder, letting you add the version to the URI if desired.
To add a resource to your RAML file, simply declare the resource by using a slash
“/” and the resource name followed by a colon “:” like so:

#%RAML 0.8
title: My Book
version: 1
baseUri: http://server/api/{version}

/my-resource:

YAML is indentation-based (using spaces, not tabs), so once we have declared
/my-resource, we can set the properties of the resource by indenting them one level.

#%RAML 0.8
title: My Book
version: 1
baseUri: http://server/api/{version}

/my-resource:
  displayName: My Resource
  description: this is my resource, it does things

To add a method, such as GET, POST, PUT, PATCH or DELETE, simply add the name
of the method (in lowercase) with a colon:

#%RAML 0.8
title: My Book
version: 1
baseUri: http://server/api/{version}

/my-resource:
  displayName: My Resource
  description: this is my resource, it does things
  get:
  post:

You can then add descriptions, query parameters, responses with examples and
schemas, or even additional nested endpoints, letting you keep all of your
resources grouped together in an easy-to-understand format:

#%RAML 0.8
title: My Book
version: 1
baseUri: http://server/api/{version}

/my-resource:
  displayName: My Resource
  description: this is my resource, it does things
  get:
    description: this is my GET method
    queryParameters:
      name:
    responses:
      200: …
  post:
    description: this is my post method
  /sub-resource:
    displayName: Child Resource
    description: this is my sub resource

4.2 URI Parameters


Because resources often take dynamic data, such as an ID or even a search filter,
RAML supports URI Parameters/placeholders. To indicate dynamic data (which
can either be defined by RAML or the URI), just use the braces as you did with
{version} in the baseUri:

#%RAML 0.8
title: My Book
version: 1
baseUri: http://server/api/{version}

/my-resource:
  /sub-resource/{id}:
  /{searchFilter}:
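If you want to document what a placeholder such as {id} contains, RAML also lets you describe it with a uriParameters block. A minimal sketch (the field values are illustrative only):

/my-resource:
  /sub-resource/{id}:
    uriParameters:
      id:
        displayName: ID
        type: integer
        description: Unique identifier of the sub-resource
        example: 846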

4.3 Query Parameters


As shown above, you can easily add query parameters or data that is expected to
be sent to the server from the client when making an API call on that resource’s
method. RAML also lets you describe these parameters and indicate whether or
not they should be required:

/my-resource:
  get:
    queryParameters:
      name:
        displayName: Your Name
        type: string
        description: Your full name
        example: Michael Stowe
        required: false
      dob:
        displayName: DOB
        type: number
        description: Your date of birth
        example: 1985
        required: true

As you’ve probably noticed, RAML is very straightforward and uses descriptive
keys: displayName for how you want it to be displayed in the documentation, a
description to describe what it does, the type (string, number, etc.), an example
for the user to see, and whether or not the parameter is required (boolean).



4.4 Responses
Likewise, RAML tries to make documenting method responses fairly straight-
forward by first utilizing the responses key and then displaying the type of
responses a person might receive as described by their status code. (We’ll look at
the different codes later.) For example, an OK response has status code 200, so
that might look like this:

/my-resource:
  get:
    responses:
      200:

Within the 200 response we can add the body key to indicate the body content
they would receive back within the 200 response, followed by the content-type
(remember APIs can return back multiple formats), and then we can include a
schema, an example, or both:

/my-resource:
  get:
    responses:
      200:
        body:
          application/json:
            example: |
              {
                "name" : "Michael Stowe",
                "dob" : "1985",
                "author" : true
              }

To add additional content-types we would simply add a new line with the same
indentation as the “application/json” and declare the new response type in a
similar fashion (e.g.: application/xml or text/xml).
To add additional responses, we can add the response code with the same
indentation as the 200, using the appropriate status code to indicate what
happened and what they will receive.
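For instance, adding an XML representation and an error response might look like the following (the example bodies are placeholders):

/my-resource:
  get:
    responses:
      200:
        body:
          application/json:
            example: |
              { "name" : "Michael Stowe" }
          application/xml:
            example: |
              <user><name>Michael Stowe</name></user>
      404:
        description: No resource was found at this URI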



As you are doing all of this, be sure to look to the right of your editor to see your
API being formed before your very eyes, letting you try out the different response
types and results.

4.5 Resource Types


As you can imagine, there may be a lot of repetitive code, as you may have
several methods that share similar descriptions, methods, response types (such as
error codes) and other information. One of the nicest features in RAML is
resourceTypes, or a templating engine that lets you define a template (or multiple
templates) for your resource to use across the entire RAML spec, helping you
eliminate repetitive code and ensuring that all of your resources (as long as you
use a standardized template/ resourceType) are uniform.

resourceTypes:
  - collection:
      description: Collection of available <<resourcePathName>>
      get:
        description: Get a list of <<resourcePathName>>.
        responses:
          200:
            body:
              application/json:
                example: |
                  <<exampleGetResponse>>
          301:
            headers:
              location:
                type: string
                example: |
                  <<exampleGetRedirect>>
          400:

/my-resource:
  type:
    collection:
      exampleGetResponse: |
        {
          "name" : "Michael Stowe",
          "dob" : "1985",
          "author" : true
        }
      exampleGetRedirect: |
        http://api.mydomain.com/users/846
/resource-two:
  type:
    collection:
      exampleGetResponse: |
        {
          "city" : "San Francisco",
          "state" : "CA",
          "postal" : "94687"
        }
      exampleGetRedirect: |
        http://api.mydomain.com/locations/78

In the above example we first define the resourceType “collection,” and then call
it into our resource using the type property. We are also taking advantage of
three placeholders: <<resourcePathName>>, which is automatically filled with the
resource name (“my-resource,” “resource-two”), and <<exampleGetResponse>>
and <<exampleGetRedirect>>, which we defined in our resources. Now, instead of
having to write the entire resource each and every time, we can utilize this
template, saving substantial amounts of code and time.

Both “my-resource” and “resource-two” will now have a description and a GET
method with 200, 301 and 400 responses. The 200 response returns back an
application/json response with the example response we provided using the
<<exampleGetResponse>> placeholder, and a redirect in the case of a 301 with
<<exampleGetRedirect>>.

Again, we will get all of this without having to write repetitive code by taking
advantage of resourceTypes.



4.6 Traits

Like resourceTypes, traits allow you to create templates, but specifically for
method behaviors such as isPageable, isFilterable and isSearchable.

traits:
  - searchable:
      queryParameters:
        query:
          description: |
            JSON array [{"field1","value1","operator1"},...]
            <<description>>
          example: |
            <<example>>

/my-resource:
  get:
    is: [searchable: {description: "search by location name", example: "[\"city\",\"San Fran\",\"like\"]"}]

To utilize traits, we first define the trait that we want to use, in this case
“searchable” with the query parameters that we want to use, including the
description (using the <<description>> placeholder) and an example (using the
<<example>> placeholder).

However, unlike with resourceTypes, we pass the values for these placeholders in
the searchable array within the “is” array (which can hold multiple traits).

Again, like resourceTypes, traits are designed to help you ensure that your API is
uniform and standard in its behaviors, while also reducing the amount of code
you have to write by encouraging and allowing code reuse.



5 Prototyping and Agile Design

As you design your spec, one of the most important things you can do is involve
your users, getting crucial feedback to ensure it meets their needs, is consistent
and is easily consumable.

The best way to do this is to prototype your API and have your potential users
interact with it as if it was the actual API you are building. Unfortunately, until
recently this hasn’t been possible due to constraints in time and budget
resources. This has caused companies to utilize a “test it as you release it”
method, where they build the API to what they think their users want, and after
doing internal QA, release it in the hope that they didn’t miss anything. This Wild
West style of building APIs has led to numerous bugs and inconsistencies, and
greatly shortened API lifetimes.

Thankfully, RAML was designed to make this process extremely simple, allowing
us to prototype our API with the click of a button, creating a mock API that relies
on example responses that can be accessed from anywhere in the world by our
users.

Likewise, Swagger and API Blueprint offer some building and mocking tools,
however, right now there isn’t anything quite as simple or easy to use as
MuleSoft’s free mocking service.

5.1 Mocking your API

MuleSoft’s API Designer not only provides an intuitive way to visually design your
API and to interact with and review resources and methods for
completeness/documentation purposes, but it also provides an easy toggle to
quickly build a hosted, mocked version of your API that relies on the “example”
responses.



To turn on the Mocking Service, one only needs to click the “Mocking Service”
toggle switch to the “On” setting:

When set to “On,” MuleSoft’s free API Designer will comment out your current
baseUri and replace it with a new, generated one that may be used to make calls
to your mock API.

This new baseUri is public, meaning that your potential users can access it
anywhere in the world, just as if your API was truly live. They will be able to make
GET, POST, PUT, PATCH, DELETE and OPTIONS calls just as they would on your live
API, but nothing will be updated, and they will receive back example data instead.
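For example, the change to the top of the spec looks something like this (the generated mock address below is purely illustrative; the designer creates its own unique URL):

#%RAML 0.8
title: My Book
version: 1
# baseUri: http://server/api/{version}
baseUri: http://mocksvc.mulesoft.com/mocks/abc123/api/{version}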

MuleSoft’s mocking service currently supports RAML and Swagger—although
when importing Swagger it is converted to RAML.

Again, what makes prototyping so important is that your users can actually try out
your API before you even write a line of code, helping you catch any
inconsistencies within the API, such as inconsistencies in resource naming,
method interaction, filter interactions or even in responses.



After all, as you build out your API, you want all of your data to be consistent, and
you want to ensure that all of your user interactions are uniform, letting
developers quickly utilize your API without having to isolate and debug special use
cases where they need to do something unique or different just for that one
particular case.

5.2 Getting Feedback

Once you provide your potential API users with a prototype and the tools to try it
out, the next step is to provide a simple way for them to give you feedback.
Ideally, during this stage you’ll have a dedicated resource such as an API engineer
or a Project Manager that can interact with your testers to not only get their
feedback, but also have conversations to fully understand what it is that they are
trying to do, or what it is that they feel isn’t as usable as it should be. Keep in
mind you’ll also want to encourage your testers to be as open, honest and blunt
as possible, as they may try to be supportive by ignoring issues or sugarcoating
the design flaws that bother them at first—a kind but costly mistake that will
ultimately harm both you and your potential users.

This step provides two valuable resources to the company. First, it provides a
clearer understanding of what it is you need to fix (sometimes the problem isn’t
what a person says, but rather what the person is trying to do), while also telling
your users that you listen to them, creating ownership of your API.

Many companies talk about creating a strong developer community, but the
simplest way is to involve developers from day one. By listening to their feedback
(even if you disagree), you will earn their respect and loyalty—and they will be
more than happy to share your API with others, since they will be just as proud of
it as you are.

It’s also important to understand that people think and respond differently. For
this reason you’ll want to create test cases that help your testers understand
what it is you are asking of them. However, you should not make them so
restrictive or “by the book” that testers cannot veer off course and try out
“weird” things (as real users of your API will do). This can be as simple as
providing a few API Notebooks that walk developers through different tasks and
then turning them loose on those notebooks to create their own scenarios. Or it
can be as complex as creating a written checklist (as is typically used in user
experience testing).

If you take the more formal route, it’s important to recognize that you will have
both concrete sequentials (“I need it in writing, step by step”) and abstract
randoms (“I want to do this. Oh, and that.” “Hey look—a squirrel!”), and you’ll
want to empower them to utilize their unique personalities and learning/working
styles to provide you with a wide scope of feedback.

Your concrete sequential developers will already do things step by step, but your
abstract randoms are more likely not to go by the book—and that’s okay. Instead
of pushing them back onto the scripted testing process, encourage them to try
other things (by saying things like, “That’s a really interesting use case; I wonder
what would happen if...”) as again, in real life, this is exactly what developers will
do, and this will unlock issues that you never dreamed of.

The purpose of the prototyping process isn’t to validate that your API is ready for
production, but to uncover flaws so that you can make your API ready for
production. Ideally, in this stage you want to find 99 percent of the design flaws
so that your API stands as a solid foundation for future development while also
remaining developer-friendly. For that reason it’s important not to just test what
you’ve already tested in-house, but to let developers test every aspect of your
API. The more transparent your API is, and the more feedback you get from your
potential API users, the more likely you are to succeed in this process.

Remember, there’s nothing wrong with finding problems. At this point, that is the
point. Finding issues now lets you circle back to the design phase and fix them
before hitting production. Take all of the developers’ feedback to heart—even if
you disagree—and watch out for weird things or common themes.

You’ll know your API is ready for the real world when you send out the prototype
and, after being reviewed by a large group of potential API users (a minimum of
10; 20–50 is ideal), you get back only positive feedback.



6 Authorizing and Authentication

Another important aspect of APIs for SaaS providers is authentication, or enabling
users to access their accounts via the API. For example, when you visit a site and it
says, “Log in with Facebook,” Facebook is actually providing an API endpoint to
that site to enable them to verify your identity.

Early on, APIs did this through the use of basic authorization, or asking the user
for their username and password, which was then forwarded to the API by the
software consuming it. This, however, creates a huge security risk for multiple
reasons. The first is that it gives the developers of the software utilizing your API
access to your users’ private information and accounts. Even if the developers
themselves are trustworthy, if their software is breached or hacked, usernames
and passwords would become exposed, letting the hacker maliciously use and
access your users’ information and accounts.

6.1 Open Authentication

To help deal with this issue, Open Authentication (OAuth), a token-based
authorization format, was introduced. Unlike basic authorization, OAuth prevents
the API client from accessing the users’ information. Instead it relays the user to a
page on your server where they can enter their credentials, and then returns the
API client an access token for that user.

The huge benefit here is that the token may be deleted at any time in the event of
misuse, a security breach, or even if the user decides they no longer want that
service to have access to their account. Access tokens can also be used to restrict
permissions, letting the user decide what the application should be able to do
with their information/account.

Once again, Facebook is a great example. When you log in with Facebook, a popup
comes up telling you that the application wants to access your account and asking
you to log in with your Facebook credentials. Once this is done it tells you exactly
which permissions the application is requesting, and then lets you decide how it
should respond.



This example shows you the Facebook OAuth screen for a user who is already
logged in (otherwise it would be asking me to log in), and an application
requesting access to the user’s public profile, Friends list and email address:

Notice that this is a page on Facebook’s server, not on Digg. This means that all
the information transmitted will be sent to Facebook, and Facebook will return an
identifying token back to Digg. In the event I was prompted to enter a
username/password, that information would also be sent to Facebook to
generate the appropriate token, keeping my information secure.

Now you may not need as complex a login as Facebook or Twitter, but the
principles are the same. You want to make sure that your API keeps your users’
data (usernames and passwords) safe and secure, which means creating a layer of
separation between their information and the client. You should never request
login credentials through public APIs, as doing so makes the user’s information
vulnerable.



6.2 Generating Tokens

It’s also extremely important to ensure that each token is unique, based both on
the user and the application that it is associated with. Even when role-based
permissions are not required for the application, you still do not want a generic
access token for that user, since you want to give the user the ability to have
control over which applications have access to their account. This also provides an
accountability layer that allows you to use the access tokens to monitor what an
application is doing and watch out for malicious behaviors in the event that they
are hacked.

It’s also smart to add an expiration date to the token, although for most
applications this expiration date should be a number of days, not minutes. In the
case of sensitive data (credit cards, online banking, etc.) it makes more sense to have a very short time window during which the token can be used, but for other applications doing so only inconveniences the user by requiring them to log in again and again. Most access tokens last between 30 and 90 days, but you should
decide the timeframe that works for you.

By having the tokens automatically expire, you are adding another layer of
security in the event that the user forgets to manually remove the token and are
also helping to limit the number of applications that have access to your users’
data. In the event that the user wants that application to be able to access their
account, they would simply reauthorize the app by logging in through the OAuth
panel again.
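
As a rough illustration of these ideas, the plain-Python sketch below (using a hypothetical in-memory store) issues a token that is unique to a user/application pair and expires after a configurable number of days. It is a sketch of the concept, not a production implementation.

import secrets
from datetime import datetime, timedelta

# Hypothetical in-memory store; a real API would persist tokens server-side.
TOKEN_STORE = {}

def issue_token(user_id, client_id, ttl_days=30):
    """Create a token tied to this specific user/application pair."""
    token = secrets.token_urlsafe(32)  # cryptographically random value
    TOKEN_STORE[token] = {
        "user_id": user_id,
        "client_id": client_id,  # ties the token to one application
        "expires_at": datetime.utcnow() + timedelta(days=ttl_days),
    }
    return token

def is_token_valid(token):
    """A token is only valid if it exists and has not yet expired."""
    record = TOKEN_STORE.get(token)
    return record is not None and record["expires_at"] > datetime.utcnow()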

6.3 OAuth2

In a two-legged OAuth 2 process, the application consuming your API first prompts the user to log in using your service. This is usually done through the use of a “log in with” button. Within this button or link is crucial information to complete the handshake, including what the application is requesting (a token),
the URI to which your application should respond (such as
http://theirsite.com/oauth.php), the scope or permissions being requested and a
client ID or unique identifier that allows their application to associate your
response with that user.

Now, when the user clicks the button (or link), they are then redirected to your
website (with all the above information being transmitted), where they are able
to log in and determine which permissions they want the application to have
access to.

Your application then generates an access token based on both the user and the
application requesting access. In other words, the access token is tightly coupled
to both the user and the application, and is unique for this combination. However,
the access token can be independent of access permissions or scope, as you may
choose to let the user dictate (or change) these permissions from within your
application. By having the scope remain changeable or decoupled from the hash
of the token, users are able to have any changes they make regarding the scope
from within your application applied immediately without needing to delete or
regenerate a new token.

The access token created should also have a set expiration (again, usually days,
but this should depend on your API’s needs). This is an additional security
measure that helps protect a user’s information by requiring them to occasionally
reauthorize the application requesting access to act on their behalf. (This is often
as simple as clicking “reauthorize” or “login with....”)

Once the access token has been generated, your application then responds back
to the URI provided by the application to provide the unique identifier or client ID
and access token that the application may utilize to perform actions or request
information on their behalf.

Because this information is not being handled through signed certificates, it is important that the information being transmitted is handled over SSL. However,
to be truly secure, BOTH parties must implement SSL. This is something to be
aware of, as many API users may not realize this and instead create insecure callback URLs such as “http://theirdomain.com/oauth.php” instead of
“https://theirdomain.com/oauth.php.” For this reason, you may want to build in
validation to ensure that you are passing data back to a secured URL in order to
prevent interception of the access token by malicious third parties.

As an added security measure, you can also restrict the access token to the
domain of the calling application.
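
A minimal Python sketch of that validation step might look like the following; the registered-domain set and the URLs are hypothetical.

from urllib.parse import urlparse

# Hypothetical registry of callback domains the client declared at sign-up.
REGISTERED_DOMAINS = {"theirdomain.com"}

def is_safe_callback(url):
    """Only redirect with an access token to HTTPS URLs on a known domain."""
    parsed = urlparse(url)
    if parsed.scheme != "https":  # reject plain-HTTP callbacks
        return False
    return parsed.hostname in REGISTERED_DOMAINS

print(is_safe_callback("http://theirdomain.com/oauth.php"))   # False
print(is_safe_callback("https://theirdomain.com/oauth.php"))  # True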

Once the application receives the access token and client ID or identifier, it can
then store this information in its system, and the handshake is complete until the
access token either expires or is deleted by the user. At that time, should the user
choose to reauthorize the application, the handshake starts back at the beginning.

In a three-legged OAuth process, the flow is the same, with the exception of
having one more party involved (such as an OAuth service provider) who would
then act as the middle leg and provide your application with the information.

OAuth and Security

When implementing OAuth it’s important to understand that it is the only thing
preventing free access to your users’ accounts by the application—and any
malicious users who try to hijack or abuse it.

This means that you need to take a security-first approach when building out your
OAuth interface, and that before building anything on your own it is important to first understand your own security needs, and secondly understand the different
security aspects (and vulnerabilities) of OAuth.

Brute Force and Man-in-the-Middle Attacks

Attackers may attempt to use brute force attacks against your OAuth solution or
utilize a man-in-the-middle attack (pretending to be your server and sneaking into
the calling application’s system that way).

Improper Storage of Tokens

It’s also important to remember that your users’ information is only as secure as
their access tokens. I’ve already mentioned being sure to make all calls over SSL,
but you should also work with your API users to ensure they are properly and
securely storing access tokens.

Adding OAuth to RAML

The good news is that once you have an OAuth service, adding it to your API’s
definition in RAML, and making it accessible through the different tools available,
is extremely easy.

For OAuth 1, you would simply need to state that it is securedBy oauth_1_0 and
provide a requestTokenUri, an authorizationUri and the tokenCredentialsUri as
shown below in the Twitter RAML example:

securitySchemes:
  - oauth_1_0:
      type: OAuth 1.0
      settings:
        requestTokenUri: https://api.twitter.com/oauth/request_token
        authorizationUri: https://api.twitter.com/oauth/authorize
        tokenCredentialsUri: https://api.twitter.com/oauth/access_token

securedBy: [ oauth_1_0 ]

For OAuth 2, you would likewise state that it is securedBy oauth_2_0 and provide
the accessTokenUri and authorizationUri. Because OAuth 2 only uses one token,
we are able to combine the requestTokenUri and tokenCredentialsUri URIs into
the same request (accessTokenUri). However, because OAuth 2 utilizes scope, we
will need to add that in using the scopes property. We’ll also need to add the
information on how to send the access token via the Authorization header:

securitySchemes:
  - oauth_2_0:
      type: OAuth 2.0
      describedBy:
        headers:
          Authorization:
            description: |
              Used to send valid access token
            type: string
      settings:
        authorizationUri: https://api.instagram.com/oauth/authorize
        accessTokenUri: https://api.instagram.com/oauth/access_token
        authorizationGrants: [ code, token ]
        scopes:
          - basic
          - comments
          - relationships
          - likes

securedBy: [ oauth_2_0 ]

You can learn more about using OAuth within RAML in the RAML spec under
“Security” at http://raml.org/spec.html#security. But thankfully the process of
implementing existing OAuth services into your RAML-based applications is far
simpler than actually creating them, and it makes it easy for your developers to
access real information when debugging or exploring your API.

7 Designing your resources

Resources are the primary way the client interacts with your API, and as such it’s
important to carefully adhere to best practices when designing them, not only for
usability purposes, but to also ensure that your API is long-lived.

In REST, resources represent either object types within your application or primary gateways to areas of your application. For example, the /users resource
would be used to interact with and modify user data. In a CRM application you
may have a /users resource that would represent the users of the application, and
a /clients resource that would represent all of the clients in the application. You
might also have a /vendors resource to manage suppliers, /employees to manage
company employees, /tickets to manage customer or sales tickets, etc. Each
resource ties back into a specific section of the CRM, providing a general gateway
to reach and interact with that specific resource.

But what makes REST truly unique is that the resources are designed to be
decoupled from their actions, meaning that you can perform multiple actions on a
single resource. This means that you would be able to create, edit and delete
users all within the /users resource.

Decoupled Architecture

If we look at the constraints of REST, we have to remember that REST is designed to be a layered system that provides a uniform interface. It must also allow the
server to evolve separately from the client and vice versa.

For this reason, resources are designed to be decoupled from your architecture
and tied not to specific methods or classes, but rather to generalized application
objects. This is a big change from SOAP, where the calls are tied to the class
methods, and RPC, where naming conventions tend to be tightly coupled to the
action you’re taking (getUsers).

By decoupling your resources from your architecture, you are ensuring that you
can change backend services or technologies without impacting how the client
interacts with your API while also providing flexibility in how the client interacts
with your API through the use of explicit methods or representations.

Use Nouns

One of the best ways to ensure that your resources are decoupled is to think of
them as webpage URIs. For example, if sharing information about your company,
you would probably send the user to an “about” section on your website. This
might look something like “yourdomain.com/about” or
“yourdomain.com/company.”

In the same way, you can build out resources using that same navigational
principle. As mentioned above in the CRM example, users could be directed to the
/users resource, clients to the /clients resource, vendors to /vendors, etc.

Another way to be sure that you are enforcing this navigational style and avoiding
tight coupling of your resources to methods or actions is to utilize nouns for the
name of your resource. If you find yourself using verbs instead, such as
“/getUsers” or “/createVendor,” there’s a good chance you’re creating a tightly
coupled resource that is designed to only perform one or two actions.

Resources should also take advantage of the plural form. For example, /users
represents the full scope of the user object, allowing interaction with both a
collection (multiple records) and an item (a single user). This means that the only
time you would want to take advantage of a singular noun is if the only possible
action that can be taken is specific to a single item or entity. For example, if you
were creating a shopping cart API, you may elect to utilize the singular form
“/cart” for the resource rather than “/carts.” But again, in general, the plural form will offer you the most flexibility and extensibility: even if a feature is not built into your application, or there are current restrictions (for example, only letting companies have one location), those restrictions may change in the future. And the last thing you want is to have both a plural and a singular form of the resource. For example, imagine users having to decide whether to use /location or /locations.

In other words, only use the singular format when there’s no possibility of the
resource having multiples—a scenario that is extremely rare. After all, even in
the /cart example, you may decide someday to give users multiple shopping carts
(saved carts, business carts, personal carts, etc.). So as you build out your
resources, you should be thinking not just about planning for now, but planning
for what could happen down the road as well.
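
To make the idea concrete, here is a minimal sketch using Python and Flask (chosen purely for illustration) of noun-based, plural resources where the HTTP method, not the URI, determines the action; the endpoints and payloads are hypothetical.

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/users", methods=["GET", "POST"])
def users_collection():
    if request.method == "POST":
        return jsonify({"created": True}), 201  # create within the collection
    return jsonify({"users": []})               # read the collection

@app.route("/users/<int:user_id>", methods=["GET", "PUT", "PATCH", "DELETE"])
def user_item(user_id):
    return jsonify({"id": user_id})             # act on a single item

Compare this with verb-style URIs such as /getUsers, which would tie each route to a single action instead of a resource.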

Content-types

Resources should also be able to support multiple content-types, or representations of that resource. One of the most common mistakes developers are making today is building their API for only one content-type. For example, they will build out their REST API and only return JSON in the response. This by
itself is not necessarily a bad thing, except that they are failing to inform their
clients that other media types are not allowed (see status codes and errors), and
even more importantly, they have not built out their architecture to be flexible
enough to allow for multiple content-types.

This is very shortsighted, as we forget that only a few years ago XML was king.
And now, just a short while later, it is repeatedly mocked by “progressive
developers,” and the world is demanding JSON (and for good reason).

With the emergence of JSON, many enterprises were caught off guard, stuck
serving XML via SOAP APIs with no new way to meet their customers’ needs. It is
only now that we are seeing many enterprises in a position to provide RESTful
APIs that serve JSON to their customers.

The last thing we want to do is put ourselves in this position again. And with new
specs emerging every day, it is just a matter of time. For example, YAML (Yet
Another Markup Language) is already gaining popularity, and while it may not be
the primary choice for most developers today, that doesn’t mean some of your
most influential clients won’t ask you for it.

By preparing for these types of scenarios, you also put yourself in a position to
meet all of your clients’ needs and provide an extremely flexible and usable API.
By letting developers decide which type of content-type they are utilizing, you let
them quickly and easily implement your API in their current architecture with
formats they are comfortable with. Surprisingly, along with functionality and
flexibility, this is something that many developers are looking for.

Using the Content-type Header

Today, when you browse the Web your client (browser) sends a content-type
header to the server with each data-sending request, telling the server which type
of data it is receiving from the client.

This same principle can be applied to our HTTP-based REST API. For example, you can use the content-type header to let clients tell your API what format of data they are sending you, such as XML (text/xml, application/xml), JSON (application/json), YAML (application/yaml) or any other format that you support.

Once the server receives this content-type, it not only knows what data it has
received but, if it is a recognized format, how to process it as well. This same
principle can be applied to your API, letting you know which type of data your
client is sending and how to consume it. It also tells you which data format they
are working with.

To go a step further, you can also take a look at the Accept header to see which
type of data they are expecting in return. Hypothetically, when you build out your
architecture your client should be able to send you XML by declaring it in the
content-type and expect JSON in return by declaring a desired JSON response in
the Accept header.

This creates the most flexibility for your API and lets it act as a mediator when
used in conjunction with other APIs. However, it also provides a lot of opportunity
for confusion. Because your API is designed to have a uniform interface, I would
recommend not taking advantage of this wonderful header, but rather relying on
the content-type to determine which data format they are working with, and then
passing back that same format.
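
A simplified sketch of that recommendation, again using Python and Flask for illustration only, inspects the content-type header and answers in the same format; the resource and payloads are hypothetical.

import json
from flask import Flask, request, Response

app = Flask(__name__)

@app.route("/books", methods=["POST"])
def create_book():
    # Echo back the format the client sent us (XML or JSON in this sketch).
    if request.content_type and "xml" in request.content_type:
        body = "<book><status>created</status></book>"
        return Response(body, status=201, mimetype="application/xml")
    # Default to JSON when the client sent application/json (or nothing).
    return Response(json.dumps({"status": "created"}), status=201,
                    mimetype="application/json")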

XML

Defined by the W3C, XML or the Extensible Markup Language was designed to
present a format that was both machine- and human-readable. Some of the more
common formats of XML include RSS (commonly used for feeds), Atom, XHTML
and, of course, SOAP.

XML also encourages the use of strict schemas and was the choice format for
many enterprises, causing the move to JSON to be more challenging.

However, while descriptive, XML takes up more bandwidth than JSON, and while
commonly used, does not have the same broad language support. While there are many libraries for interpreting XML, many of these are used as add-ons rather
than core libraries.

<books>
  <book>
    <title>This is the Title</title>
    <author>Imag E. Nary</author>
    <description>
      <![CDATA[Once upon a time there was a great book]]>
    </description>
    <price>12.99</price>
  </book>
  <book>
    <title>Another Book</title>
    <author>Imag E. Nary</author>
    <description>
      <![CDATA[This is the sequel to my other book]]>
    </description>
    <price>15.99</price>
  </book>
</books>

JSON

JSON, or the JavaScript Object Notation, was designed as an alternative to XML, consisting of key/value pairs in a human-readable format. Originally created by Douglas Crockford for use within JavaScript, JSON is now language agnostic,
being described in two separate RFCs, and has quickly become one of the most
commonly used formats due to its broad language support and ability to serialize
objects.

JSON is represented by the application/json content-type and typically takes advantage of the .json extension.

[
  {
    "title" : "This is the Title",
    "author" : "Imag E. Nary",
    "description" : "Once upon a time there was a great book",
    "price" : "12.99"
  },
  {
    "title" : "Another Book",
    "author" : "Imag E. Nary",
    "description" : "This is the sequel to my other book",
    "price" : "15.99"
  }
]

You can also define strict JSON through the use of JSON Schemas, although these
are not as commonly used.
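
For illustration, such a schema might be expressed and checked with the third-party jsonschema package (assumed here purely as an example); the schema and sample data below are hypothetical.

from jsonschema import validate, ValidationError

# Hypothetical schema describing the book objects used in the examples above.
book_schema = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "author": {"type": "string"},
        "price": {"type": "number"},
    },
    "required": ["title", "author", "price"],
}

try:
    validate(instance={"title": "Another Book", "author": "Imag E. Nary",
                       "price": 15.99},
             schema=book_schema)
except ValidationError as err:
    print("Invalid book:", err.message)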

YAML

YAML, or Yet Another Markup Language/YAML Ain’t Markup Language (depending on whether you go by the original meaning or the later acronym), was originally created by Clark Evans, Ingy döt Net and Oren Ben-Kiki with the goal of
being a simpler, more human-readable format.

To accomplish this goal, YAML utilizes whitespace to identify properties, eliminating the need for opening/closing brackets in most instances. However,
support for YAML has been slow, with many languages lacking core libraries for its
serialization and deserialization. This may be due in part to the complex rules
YAML incorporates, providing users with useful shortcuts in building their files
out, but making the actual deserialization process much more difficult.

While YAML hasn’t been widely adopted as a response format, it has become the
format of choice for API definition and modeling languages, including both RAML
and Swagger.

Books:
  - title: This is the Title
    author: Imag E. Nary
    description: Once upon a time there was a great book
    price: 12.99
  - title: Another Book
    author: Imag E. Nary
    description: |
      This is the sequel to my other book
    price: 15.99

Versioning

I cannot stress enough that when it comes to building an API, your goal should be
to create one that is so amazing that you can avoid versioning altogether.
However, as hard as we try, there is a good chance (at least with today’s
technology) that at some point in time we will find ourselves having to version our
API for one reason or another.

There have been several suggested methods for versioning, but the first thing to
remember is that, fundamentally, versioning is a bad thing. It’s important to
understand that the lower the version number, the better. In the desktop
software world, we push for that next number, creating Version 1, 2 and—in
some cases—skipping numbers altogether to make the product sound more
advanced than it is! But in the API world, the sign of success is not having to
version.

This means you should consider avoiding minor versioning altogether, since it
serves no real purpose in an API. Any feature changes should be made immediately and seamlessly available to developers for their consumption. The
only strong argument for minor versioning is in tightly coupled SDKs, or saying this
SDK supports the features of Version 1.1, whereas access to recently added
features would require an upgrade to Version 1.2.

You could also make the argument that minor versioning lets developers quickly
know there’s been a change to your API—an argument that makes sense on the
surface. Of course the counter argument is that you may have developers who
misunderstand minor versioning and instead rush to try and upgrade their system
to the new API without needing any of the new features (or while they’re already
taking advantage of them without realizing it). This may result in unnecessary
support calls and emails, as well as confusion (“Can I do this in 1.1 or only in 1.2?
And how do I access version 1.1 instead of 1.2?”).

The other counterpoint to this argument is that if you build a strong developer
community, developers will talk about new features (although not everyone will
be involved in the community), and if you utilize registration to gain an API key
(spoiler alert— you should) you can keep in touch with all of your developers via
email. (They may not be read, but then again, minor versioning in the code might
not be seen either.)

So with this in mind, let’s take a look at the three different mainstream schools of
thought regarding how to version your API.

In the URI

This method includes the version number in the base URI used for API calls,
making developers explicitly call the API version that they want. For example, to
access Version 1 of the API, one would use api.domain.com/v1/resource to access
the API, whereas for Version 2 they would call api.domain.com/v2/resource. This
means that when reading documentation and implementing your API, developers
will be forced to look at the version number, since they may not notice it when
briefly looking over code samples unless it is prominently called out. This makes
this method preferable for APIs that are catering to newer developers.
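
As a rough sketch of URI versioning, assuming Python and Flask purely for illustration, each major version can be mounted under its own base path; the routes and payloads are hypothetical.

from flask import Flask, Blueprint, jsonify

v1 = Blueprint("v1", __name__, url_prefix="/v1")
v2 = Blueprint("v2", __name__, url_prefix="/v2")

@v1.route("/users")
def users_v1():
    return jsonify({"version": 1, "users": []})

@v2.route("/users")
def users_v2():
    return jsonify({"version": 2, "data": {"users": []}})

app = Flask(__name__)
app.register_blueprint(v1)  # api.domain.com/v1/users
app.register_blueprint(v2)  # api.domain.com/v2/users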

One argument against the URI method is that it doesn’t allow the API to be
hypermedia driven, or that the content-type method makes this easier. This is
partially because REST is designed to be hypermedia driven and not tightly coupled to URIs, which URI versioning does. Also, most hypermedia specs rely on
relative path text links that utilize the base URI, meaning that unless there is an
explicit change made by the developer, the client will always call the same version
of the API, staying within the current realm and not being able to move back and
forth between versions automagically.

However, even with the content-type, we currently have no good way to know
what the client supports. So when calling Version 2 from a client that only
supports certain Version 2 segments, we’re still likely to get back Version 2 links in
the hypertext response, causing the client application to fail.

In other words, the problem cannot be avoided regardless of whether you’re using the URI or the content-type to denote the version. As applications become more advanced with machine learning and code-on-demand we may see this change, but I feel it can be accommodated regardless of which method you are using, just perhaps not as cleanly as with the content-type versioning method.

One advantage of URI versioning is that it tells developers which version they are
using, and is easier for newer developers to implement. It also helps prevent
confusion if developers forget to append the version on top of the content-type
version type (which if using this method should throw an error to prevent
ambiguity).

Of course, it’s also very easy for the base URI to become hidden somewhere in an
include, meaning that developers may not explicitly know which version of the
API they are using without having to dig into their own code. Just the same, the
other methods run this same risk depending on how the client’s application is
architected.

In the Content-type Header

This method is arguably cleaner and far less coupled than the URI method. With
this method, developers would append the version to the content-type, for
example:

Content-type: application/json+v1

This lets developers quickly modify the version for the calls needed and reinforces
the use of representations to communicate with the server. It also allows for a
more dynamic and hypermedia-led API, as one could implement the Accept
header to return back a specific version. (For example, if I make a call on a V2
feature, I may ask to only return V1 links in the response, as other applications of
my API may not be compatible). Although doing this could create an architectural
nightmare (What happens if you do a create? Do you only return V2 responses for
that item, and then V1 for resources that have a shared relationship? What about
compatibility issues?).

This also raises questions regarding a uniform interface, as you are transitioning
the user between two incompatible versions of your API to accomplish different
things. On the other hand, this may help developers transition from one version
to another, as they can do it over time instead of all at once. Just the same, I can’t
say it is recommended, as I believe that depending on business needs and
implementation, it may cause far more harm than good.

Another issue with the content-type is that developers have to know that they
need to call this out. This means that you have to not only have clear
documentation, but also validation regarding whether or not this information is
provided.

You must also have a central routing mechanism between your two APIs, which
presents a possible domain challenge. Since a key reason you are versioning is
that your current version no longer meets your needs, you are probably not just
rebuilding one section of the API, but rather its very foundation. This may make
taking advantage of the content-type method of versioning far more complex
than having multiple, but explicit, URIs.

Perhaps the biggest benefit of the content-type method is if you have two
different versions of your application (some customers are on V1, some on V2)
and you want to provide an API that can accommodate both. In that case you’re
not really versioning your API, but rather letting customers tell you which version
of your application they’re on so you can provide them with the appropriate data
structures and links. This is an area where the content-type method absolutely
excels.

Outside of this use case, the content-type method falls prey to many of the same
problems as the URI method, in addition to creating more work and opening you
up to “out of the box” use cases (such as people trying to take advantage of the
Accept header in conjunction with the content-type header to go between
versions).

In a Custom Header

The custom header is very similar to the content-type header, with the exception that those using this method believe the version does not belong in the content-type header and instead makes more sense in a custom header, such as one called “Version.”

Version: 1

This helps prevent the whole “Accept” conundrum, but it also runs into the same
issues of the content-type header as well as forces developers to veer into the
documentation to understand what your custom header is, eliminating any
chance of standardization (unless everyone decides on a standard custom header,
such as “version”).

This also opens up confusion, as developers may ask how to send a version
number through a header and get multiple answers ranging from other APIs’
custom headers to using the content-type header.

For that reason, I cannot recommend using the custom header. And while I
personally agree that the content-type header may not be the best place either, I
think using a pre-existing, standard header is better than creating an offshoot—at
least until a new header is established and standardized for this purpose.

8 Designing your Methods

Similar to having a class with methods for performing different actions on an object, REST utilizes methods within the resources. For a web-based REST API that will be accessed over the HTTP or HTTPS protocol, we can take advantage of the predefined, standardized HTTP methods. These methods represent specific
actions that can then be tied to the CRUD acronym.

Utilizing CRUD

CRUD stands for Create, Read, Update and Delete and is an acronym commonly
used when referring to database actions. Because databases are data-driven, like
the Web, we can apply these same principles to our API and how our clients will
interact with the methods.

This means that we will be utilizing specific methods when we want to create new
objects within the resource, specific methods for when we want to update
objects, a specific method for reading data and a specific method for deleting objects.

However, before we can apply CRUD to our API, we must first understand the
difference between interacting with items versus collections, as well as how each
method affects each one. This is because multiple methods can be used for both
creating and updating, but each method should only be used in specific cases.

Items versus Collections

Before we continue, we need to define the difference between an item and a collection. An item is a single result set, or a single data object, such as a specific
user. A collection, on the other hand, is a dataset comprised of multiple objects or
multiple users. This differentiation is important because while some methods are
appropriate for collections, other methods are appropriate for dealing with an
item.

For example, when dealing with a collection you have to be very careful when
allowing updates or deletes, as an update on a collection will modify every record
within it, and likewise a delete will erase every single record. This means that if a client accidentally made a delete call on a collection, they would effectively (and
accidentally) delete every single user from the system.

For this reason, if you plan to let your users do mass edits/deletes, it’s always a
good idea to require an additional token in the body to ensure that they are doing
exactly what they are intending. Remember, REST is stateless, so you should not
force them to make multiple calls, as you have no way of carrying over state on
the server side.

For single, pre-existing records, it makes perfect sense to let a user edit or even
delete the record. However it doesn’t make much sense to let them create a new
record from within a specific record. Instead, creation should be reserved for use
on the collection.

While this can be a little confusing at first, with proper documentation and the
use of the OPTIONS method, your API users will be able to quickly identify which
methods are available to them. As they work with your API, this will eventually
become second nature as long as you remain consistent in their usage.

HTTP Methods

You’re probably already quite familiar with HTTP methods, or HTTP action verbs.
In fact, every time you browse the Web with your browser, you are taking
advantage of the different methods—GET when you are requesting a website and
POST when you are submitting a form.

Each HTTP method is designed to tell the server what type of action you want to
take, ranging from requesting data, to creating data, to modifying data, to
deleting data, to finding out which method options are available for that given
collection or item.

Of the six methods we’re going to look at, five can be mapped back to CRUD.
POST is traditionally used to create a new object within a collection, GET is used
to request data in a read format, PUT and PATCH are used primarily for editing
existing data and DELETE is used to delete an object.

However, there is some crossover among the different methods. For example,
while POST is predominantly used to create objects, PUT can also be used to create an object within a resource—if it doesn’t already exist. Likewise, PUT and
PATCH can both be used to edit existing data, but with very different results. For
that reason it’s important that you understand what each method is intended to
do, and which ones you should use for what purpose. This is also something you’ll
want to explain in your documentation, as many developers today struggle with
understanding how to use PUT to create, as well as the difference between PUT
and PATCH.

GET

The GET HTTP Method is designed explicitly for getting data back from a resource.
This is the most commonly used HTTP Method when making calls to a webpage,
as you are getting back the result set from the server without manipulating it in
any way.

In general, a GET response returns a status code 200 (or ok) unless an error
occurs, and relies on a querystring (domain.com/?page=1) to pass data to the
server.

The GET method should be used any time the client wants to retrieve information
from the server without manipulating that data first.

POST

One of the most versatile HTTP Methods, POST was designed to create or
manipulate data and is used commonly in Web forms. Unlike GET, POST relies on
body or form data being transmitted and not on the query string. As such, you should not rely on the query string to transmit POST data, but rather send your
data through form or body data, such as in JSON.

While POST is extremely versatile and is used across the Web to perform many different functions due to its common acceptance across multiple servers, because we need an explicit way to define what type of action should be taken within a resource, it is best to use the POST method only for the creation of an item within a collection or a result set (as with a multi-filtered search).

When creating an object (the function for which POST should predominantly be
used), you will want to return a 201 status code, or Created, as well as the URI of
the created object for the client to reference.
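
A minimal sketch of such a POST handler, using Python and Flask for illustration with a hypothetical in-memory store, returns 201 along with a Location header pointing at the newly created item.

from flask import Flask, jsonify, request, url_for

app = Flask(__name__)
USERS = {}  # hypothetical in-memory store

@app.route("/users/<int:user_id>")
def get_user(user_id):
    return jsonify(USERS.get(user_id, {}))

@app.route("/users", methods=["POST"])
def create_user():
    new_id = len(USERS) + 1
    USERS[new_id] = request.get_json(silent=True) or {}
    response = jsonify({"id": new_id})
    response.status_code = 201                                   # Created
    response.headers["Location"] = url_for("get_user", user_id=new_id)
    return response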

PUT

Less well known is the PUT Method, which is designed to update a resource
(although it can create the resource if it doesn’t exist).

Traditionally, PUT is used to explicitly edit an item, overwriting the object with the
incoming object. When using the PUT method, most developers are not expecting
an object to be created if it doesn’t exist, so taking advantage of this clause within
this method should be done with extreme care to ensure that developers know
exactly how your API uses it. It’s also important that your usage of PUT remains
consistent across all resources. (If it creates an object on one resource, it should
do the same on all the others.)

If you elect to utilize PUT to create an item that doesn’t exist (for example, calling
“/users/1” would create a user with the ID of 1), it is important to return the Created status code, or 201. This tells your consumers that a new object was
created that may (or may not) have been intended.

It’s also important to understand that you cannot use PUT to create within the
resource itself. For example, trying a PUT on /users without explicitly stating the user ID would be a violation of the standardized HTTP specification for this method.

For this reason I would highly recommend not creating an object with PUT, but
rather returning an error informing the client that the object does not exist and
letting them opt to create it using a POST if that was indeed their intention. In this
case, your request would simply return status code 200 (okay) if the data was
successfully modified, or 304 if the data was the same in the call as it was on the
server.

It’s also important to explain to your users that PUT doesn’t just overwrite the
object data that they submit, but all of the object data. For example, if I have a
user with the following structure:

{
"firstName" : "Mike",
"lastName" : "Stowe",
"city" : "San Francisco",
"state" : "CA"
}

And I submit the following request using a PUT:

{"city" : "Oakland"}

The object on the server would be updated as such, reflecting a complete overwrite:

{
"firstName" : "",
"lastName" : "",
"city" : "Oakland",
"state" : ""
}

Of course, this is traditionally not what the user wants to do, but is the effect of
PUT when used in this case. What the client should do when needing to patch a
portion of the object is to make that same request using PATCH.

PUT should never be used to do a “partial” update.

PATCH

Another lesser-known method, PATCH has created some confusion, as it operates differently as an HTTP method than when used in bash/shell commands.

In HTTP, PATCH is designed to update only the object properties that have been
provided in the call while leaving the other object properties intact.

Using the same example we did for PUT, with PATCH we would see the following
request:

{"city" : "Oakland"}

Which would return the following data result set from the server:

{
"firstName" : "Mike",
"lastName" : "Stowe",
"city" : "Oakland",
"state" : "CA"
}

Like PUT, a PATCH request would return either a 200 for a successful update or a
304 if the data submitted matched the data already on record— meaning nothing
had changed.
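
The difference between the two methods can be illustrated with a small, plain-Python sketch that applies each semantic to the stored user object from the examples above; this is an illustration of the behavior described here, not a full server implementation.

stored_user = {"firstName": "Mike", "lastName": "Stowe",
               "city": "San Francisco", "state": "CA"}

def apply_put(resource, incoming):
    """PUT overwrites the whole object with whatever was sent."""
    blank = {key: "" for key in resource}  # fields not sent are emptied
    blank.update(incoming)
    return blank

def apply_patch(resource, incoming):
    """PATCH only updates the properties included in the request."""
    updated = dict(resource)
    updated.update(incoming)
    return updated

print(apply_put(stored_user, {"city": "Oakland"}))
# {'firstName': '', 'lastName': '', 'city': 'Oakland', 'state': ''}
print(apply_patch(stored_user, {"city": "Oakland"}))
# {'firstName': 'Mike', 'lastName': 'Stowe', 'city': 'Oakland', 'state': 'CA'}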

DELETE

The DELETE Method is fairly straightforward, but also one of the most dangerous methods out there. Like the PUT and PATCH methods, accidental use of the DELETE method can wreak havoc across the server. For this reason, like PUT and
PATCH, use of DELETE on a collection (or the main gateway of the resource:
/users) should be disallowed or greatly limited.

When making a DELETE request, the client is instructing the server to permanently remove that item or collection.

When using a DELETE, you will most likely want to return one of three status
codes. In the event that the item (or collection) has been deleted and you are
returning a content or body response declaring such, you would utilize status
code 200. In the event that the item has been deleted and there is nothing to be
returned back in the body, you would use status code 204. A third status code,
202, may be used if the server has accepted the request and queued the item for
deletion but it has not yet been erased.

OPTIONS

Unlike the other HTTP Methods we’ve talked about, OPTIONS is not mappable to
CRUD, as it is not designed to interact with the data. Instead, OPTIONS is designed to communicate to the client which of the methods or HTTP verbs are available to
them on a given item or collection.

Because you may choose not to make every method available on each call they
may make (for example not allowing DELETE on a collection, but allowing it on an
item), the OPTIONS method provides an easy way for the client to query the
server to obtain a quick list of the methods it is allowed to use for that collection
or item.

When responding to the OPTIONS method, you should return back either a 200 (if
providing additional information in the body) or a 204 (if not providing any data
outside of the header fields) unless, ironically, you choose not to implement the
OPTIONS method, which would result in a 405 (Method Not Allowed) error.
However, given that the purpose of the OPTIONS method is to declare which
methods are available for use, I would highly recommend implementing it.
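
A minimal sketch of an OPTIONS handler, again assuming Python and Flask purely for illustration, responds with 204 and the standard Allow header listing the supported methods; in a real API these handlers would sit alongside the data methods for each route.

from flask import Flask, Response

app = Flask(__name__)

@app.route("/users", methods=["OPTIONS"])
def users_options():
    response = Response(status=204)                   # no body needed
    response.headers["Allow"] = "GET, POST, OPTIONS"  # collection methods
    return response

@app.route("/users/<int:user_id>", methods=["OPTIONS"])
def user_options(user_id):
    response = Response(status=204)
    response.headers["Allow"] = "GET, PUT, PATCH, DELETE, OPTIONS"
    return response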

9 Handling Responses

Since APIs are designed to be consumed, it is important to make sure that the
client, or consumer, is able to quickly implement your API and understand what is
happening. Unfortunately, many APIs make implementation extremely difficult,
defeating their very purpose. As you build out your API you want to ensure that
you not only provide informational documentation to help your developers
integrate/debug connections, but also return back relevant data whenever a user
makes a call—especially a call that fails.

While having a well-formatted, coherent body response is extremely important (you want something that can easily be deserialized, iterated and understood),
you’ll also want to provide developers with quick references as to what happened
with the call, including the use of status codes. And in the case of a failure, you
will want to provide descriptive error messages that tell the client not just what
went wrong, but how to fix it.

HTTP Status Codes


When implementing REST over HTTP, we are able to take advantage of the HTTP Status Codes, a structure that most developers are familiar with. (For example, most developers can tell you that 200 is “okay,” 404 is “page/resource not found” and 500 is a server error.)

Using the current HTTP status codes prevents us from having to create a new
system that developers must learn, and creates a standard of responses across
multiple APIs, letting developers easily integrate your API with others while using
the same checks.

It is important, however, that you stick to standardized or accepted status codes, as you’ll find plenty of status codes in use across APIs that do not really exist. This creates the opportunity for confusion; for example, Twitter uses its famous 420 status code (Enhance Your Calm) for too many requests, while Java’s Spring Framework used to return 420 to refer to a method failure (now deprecated).

In this use case Twitter, while opting for a humorous “Easter Egg” response, could
have instead opted for status code 429 (Too Many Requests), and it may have
been wiser for Spring Framework to return a 500 to represent a generic server
error.

As you can see, someone utilizing the Spring Framework at this time while making
a call to Twitter might be confused by what 420 really meant, and whether the
method was not allowed (405) or there was a server error (500) instead of
realizing they simply made too many calls. Imagine how much time and energy
that confusion could cause them in debugging and trying to fix an application that
is already working perfectly.

It’s also important to use status codes because the behavior of the server may be
different from the expected behavior of the client. For example, if a client does a
PUT on an item and the item doesn’t exist, per the RFC the item/object can then
be created on the server—but not all APIs adhere to this idea. As such, by
returning a 201 (Created) instead of a 200 (OK), the client knows that the item did
not previously exist and can choose to delete it (if it was accidentally created) or
update their system with the data to keep everything in sync. Likewise, a 304
response would inform them that nothing was modified (maybe the data was
identical), and a 400 would inform them that it was a bad request that could not
be handled by the server (in the event where you elect not to create the item).

Some of the more common HTTP status codes you may run into, and should
consider using, include the ones described here:

https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html

Handling Errors

Unfortunately, no matter how hard you try and how carefully you document your
API, errors will always be a reality. Developers will skip over documentation,
misunderstand it or simply discover that calls which were previously valid no
longer work.

Because APIs are designed to be consumed, it is vital that you provide your clients
with the tools, resources and information to help consume them. This means
investing in your error messaging, an aspect of APIs that is often overlooked. In
many cases companies have opted for generic error messages, as they have failed
to consider the long-term ramifications.

Generic error messages tell developers that “something went wrong,” but fail to tell them exactly what went wrong and how to fix it. This means that developers
must spend hours debugging their code—and your API—in hopes of finding the
answer. Eventually they’ll either just give up (often finding a competitor’s API
instead) or contact support, requiring them to go through everything to track
down oftentimes veiled and abstruse issues.

In the end, it costs you far more to have generic error messages than it does to
provide descriptive error messages that alleviate developer frustration (“Oh, I
know exactly what happened”); reduce development, debug, and integration
time; and reduce support requirements. And by having descriptive error
messages, when support is needed, they will have a good idea of what the issue is
and where to look, saving you resources in that department as well (and keeping
your API support team happy).

An additional advantage of providing descriptive error messages is that generic messages tend to be ambiguous and confusing. For example, what is the
difference between “restricted access” and “permission denied?” Well, in the
above cases, quite a bit. But someone unfamiliar with your API or naming
conventions may not realize that.

A descriptive error should include an identifier for support (something that is short, such as a code that they can ask for and be able to look up quickly within
their system), a description of what went wrong (as specific as possible), and a
link to documentation where the developer can read more on the error and how
to fix it.

Descriptive Error Formats

Thankfully, despite its fairly rare usage, descriptive error messaging is nothing
new, and there are several different formats out there that already incorporate the above information, providing an easy way to implement descriptive errors in a
standardized and recognized format.

JSON API

JSON API was created to serve as a way of returning back JSON-based response
metadata, including hypertext links (which we’ll discuss in Chapter 12), as well as
handling error bodies.

Rather than returning back just a single error, JSON API, as well as the other error
specs we’ll look at, lets you return back multiple errors in a single response, each
containing a unique identifier, an error code to the correlating HTTP status, a brief
title, a more in-depth message describing the error, and a link to retrieve more
information.

It is important to note that with JSON API errors, no other body data should be
returned, meaning that the error should be the primary response and not
embedded or returned with other resource body content or JSON.

JSON API Errors allow for the following properties to be used:

error.errors: An array containing all of the errors that occurred. (For example, if the form failed because of missing data, you could list out which fields are missing here with an error message for each of them.)

error.errors[].id: A unique identifier for the specific instance of the error. (Please note that this is not the status code or an application-specific identifier.)

error.errors[].href: A URL that the developer can use to learn more about the issue (for example, a link to documentation on the issue).

error.errors[].status: The HTTP status code related to this specific error, if applicable.

error.errors[].code: An application-specific code that identifies the error for logging or support purposes.

error.errors[].title: A short, human-readable message that briefly describes the error.

error.errors[].detail: A more in-depth, human-readable description of the error and how to resolve it.
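
For illustration only, an error response built from these properties might look like the sketch below; the identifiers, URLs and messages are hypothetical, and the top-level "errors" key follows the array described above.

import json

error_response = {
    "errors": [{
        "id": "f3a1c2e4-5b6d-4e7f-8a9b-0c1d2e3f4a5b",        # unique instance id
        "href": "https://api.example.com/docs/errors/1001",   # docs link
        "status": "400",
        "code": "1001",                                        # application code
        "title": "Missing required field",
        "detail": "The 'email' field is required to create a user.",
    }]
}

print(json.dumps(error_response, indent=2))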

Usability is Key

Remember, one of the keys to your API’s success is usability. It’s easy to return a
“Permission Denied” error to a resource, but it is far more helpful if you return
“Invalid Access Token” with additional resources telling the developer exactly
what the problem is.

Making your API easy to consume with HTTP Status Codes and Descriptive Error
Messaging may be the difference between an award-winning application on your
API or an award-winning application on your competitor’s.

10 Managing your API with a Proxy

Once you have designed and built your API, one of the most crucial aspects is protecting your system’s architecture and your users’ data, and scaling your API in order to avoid downtime and meet your clients’ demands.

The easiest and safest way to do this is by implementing an API manager, such as
MuleSoft’s API Gateway. The API manager can then handle API access
(authentication/provisioning), throttling and rate limiting, setting up and handling
SLA tiers and—of course—security.

A hosted API manager also provides an additional layer of separation, as it can stop DDoS attacks, malicious calls and over-the-limit users from ever reaching your system’s architecture, while also scaling with demand—meaning the bad
requests are killed at a superficial layer, while valid requests are passed through
to your server.

Of course, you can build your own API manager and host it on a cloud service such
as AWS, but you have to take into consideration both the magnitude and the
importance of what you’re building. Because an API manager is designed to
provide both scalability and security, you’ll need to make sure you have system
architects who excel in both, as one mistake can cost hundreds of thousands—if
not millions—of dollars.

And like any large-scale system architecture, trying to design your own API
manager will most likely prove costly—usually several times the cost of using a
third-party API manager when all is said and done.

For this reason, I highly recommend choosing an established API management
company such as MuleSoft to protect and scale your API, as well as provide you
with the expertise to help your API continue to grow and live a long, healthy life.

API Access

Controlling access is crucial to the scale and security of your API. By requiring
users to create accounts and obtain API keys or keys for their application, you
retain control. The API key acts as an identifier, letting you know who is accessing
your API, for what, how many times, and even what they are doing with that
access.

By having an API key, you can monitor these behaviors to isolate malicious users
or potential partners who may need a special API tier. If you choose to monetize
your API, monitoring API access will also allow you to identify clients who may
want or need to upgrade to a higher level.

Typically, an API key is generated when a user signs up through your developer
portal and then adds an application that they will be integrating with your API.

However, this isn’t always the case, as some companies create singular API keys that can be used across multiple applications. By setting the key at the application level, though, you can see exactly the types of applications for which the client is utilizing your API, as well as which applications are making the most calls.

OAuth2 and More

A good API manager goes beyond just the basic management of API keys, and will also help you implement security for restricting access to user information, such as OAuth2, LDAP, and/or PingFederate. This lets you take advantage of systems you are already utilizing, or the flexibility of using a third-party service to handle OAuth if you choose not to build it yourself (remember chapter 6).

Throttling

Throttling and rate limiting allow you to prevent abuse of your API, and ensure
that your API withstands large numbers of calls (including DoS and accidental loops). With throttling, you can set a delay on calls after an SLA tier has been
exceeded, slowing down the number of calls an API client is making.

Rate limiting lets you set a hard number for how many calls the client may make
in a specific time frame. Essentially, if a client is making too many calls, you can
slow down the responses or cut the client off to prevent the system from being
overrun or disrupting your other users.
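
A very simplified sketch of the idea, in plain Python with hypothetical limits, counts calls per API key within a fixed window and rejects callers once the limit is reached; such a caller would then receive a 429 response.

import time
from collections import defaultdict

RATE_LIMIT = 4        # calls allowed per window (e.g. a standard SLA tier)
WINDOW_SECONDS = 1

calls = defaultdict(list)  # api_key -> timestamps of recent calls

def allow_request(api_key):
    """Return True if this caller is still within its rate limit."""
    now = time.time()
    recent = [t for t in calls[api_key] if now - t < WINDOW_SECONDS]
    calls[api_key] = recent
    if len(recent) >= RATE_LIMIT:
        return False      # caller should receive 429 Too Many Requests
    calls[api_key].append(now)
    return True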

This is especially helpful in negating malicious attacks, as well as the dreaded
accidental infinite loop that pounds your API with calls. While this practice may
seem harsh at first, it is widely adopted to ensure the best quality of service for
everyone.

SLA Tiers

SLA tiers, or Service Level Agreement tiers, let you set up different rules for different
groups of users. For example, you may have your own mobile apps, premium
partners, paid API users, and standard/free users. You may want to limit the
access of each of these groups to ensure the highest quality engagement for your
users, while also helping prevent loops by inexperienced developers testing out
your API. For example, you can ensure premium partners and your mobile apps
have priority access to the API with the ability to make thousands of calls per
second, while the standard API user may only need four calls per second. This
ensures that the applications needing the most access can quickly obtain it
without any downtime, while your standard users can also rely on your API
without having to worry about someone abusing the system, whether
accidentally or on purpose.

An API manager that incorporates SLA tiers should let you create the different
levels and specify the number of requests per second users in this tier are
allotted. You should also be able to determine whether or not users of the tier
require your approval. For example, you may offer automatic approval to
premium partners, but require special agreements and manual intervention in
order for basic or free users to take advantage of the higher throughput.

Once you have set up your SLA tiers, you should be able to assign applications to
that tier.

Analytics

Another valuable tool your API manager should provide you is analytics, letting
you quickly see which of your APIs are the most popular and where your calls are
coming from. These analytics can help you identify which types of devices (OS) the clients are using, the top applications using your API and the geographical
location of those accessing your API.

Along with identifying your API’s usage trends, as well as being able to monitor
API uptime/spikes/response times (especially in regards to your own server
architecture), analytics also help you prove the business use case for your API, as
you are able to show both its popularity and how it is being used.

These metrics are especially key when making business decisions, reporting to
business owners/stakeholders, and designing a Developer Relations/Evangelism program (as oftentimes API Evangelists’ performance is measured by the number of API keys created and the number of calls to an API).

Security

Security is an essential element of any application, especially in regards to APIs, where you have hundreds, to thousands, to hundreds of thousands of
applications making calls on a daily basis.

Every day, new threats and vulnerabilities are created, and every day, companies
find themselves racing against the clock to patch them. Thankfully, while an API
manager doesn’t eliminate all threats, it can help protect you against some of the
most common ones. And when used as a proxy, it can prevent malicious attacks
from hitting your architecture.

It’s important to understand that when it comes to security, you can pay a little
up front now, or a lot later. After all, according to Stormpath, in 2013 the average
cost of a personal information breach was $5.5 million. When Sony’s PlayStation
network was hacked, exposing 77 million records, the estimated cost to Sony was
$171 million for insurance, customer support, rebuilding user management and
security systems.

This is in part why you should seriously consider using a pre-established, tested
API manager instead of trying to build your own: otherwise you bear not only
the development costs, but also the risks that go along with them. When it comes
to security, if you don’t have the expertise to build these types of systems, it’s
always best to let those with that expertise do it for you.

Cross-Origin Resource Sharing

Cross-Origin Resource Sharing, or CORS, allows resources (such as JavaScript) to
be used by a client outside of the host domain. By default, most browsers have
strict policies to prevent cross-domain HTTP requests via JavaScript due to the
security risks they can pose.

CORS lets you enable these cross-domain calls, while also letting you specify host
limitations (for example, only calls from x-host will be allowed; all other hosts will
be denied) using the Access-Control-Allow-Origin header. The idea is that this will
help reduce the security risks by only letting certain hosts take advantage of this
functionality.
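
As a minimal sketch of what this looks like from the API’s side (illustrative only;
the title and host below are placeholders), a RAML trait can document the
Access-Control-Allow-Origin header on responses so consumers know which
origin is allowed:

    #%RAML 1.0
    title: Example API            # illustrative fragment only
    traits:
      cors:
        responses:
          200:
            headers:
              Access-Control-Allow-Origin:
                description: The single origin permitted to consume this response cross-domain
                example: https://app.example.com
    /products:
      get:
        is: [ cors ]              # every 200 response now documents the CORS header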

With that said, simply restricting hosts does not ensure that your application will
be safe from JavaScript attacks, as clients can easily manipulate the JavaScript
code themselves using freely available browser-based tools (such as Firebug or
the browser’s built-in inspector).

Being client-side, CORS can also present other security risks. For example, if a
developer chooses to call your API via client-side JavaScript instead of using
a server-side language, they may expose their API key, access tokens and other
secret information.

As such, it’s important to understand how users will be utilizing CORS to access
your API to ensure that secret or detrimental information is not leaked.

Keep in mind that every API manager handles CORS differently. Some API managers
may let you set up a list of allowed hosts, while others may apply a blanket
policy allowing every host to make cross-origin requests.

As such, the golden rule for CORS is to leave it disabled until you have a valid,
well-thought-out reason and a specific use case for enabling it.

IP Whitelisting/Blacklisting

While IP whitelisting/blacklisting is not an acceptable method for securing an API
by itself (as IP addresses can be spoofed, and hackers often use multiple
addresses), it does provide an added layer of security that prevents or allows
known IPs based on previous interactions. For example, you can choose to
whitelist your or your partners’ dedicated IPs, or blacklist IPs that have a history
of making malicious calls against your server or API. You can also block the IPs of
users that have abused your API to prevent their server from communicating with
it (something they could otherwise easily get around, if you have a free tier, by
simply creating a new account with a different email).
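
As an illustration only (the field names are hypothetical and the addresses are
reserved documentation ranges, not real hosts), an IP filtering policy is
conceptually just two lists consulted before any other processing:

    # Hypothetical sketch of an IP filtering policy; not a vendor's actual schema.
    ipFilter:
      whitelist:            # always allow known, trusted addresses
        - 203.0.113.10      # e.g. a partner's dedicated IP
        - 198.51.100.0/24   # e.g. your own server range
      blacklist:            # always reject addresses with a history of abuse
        - 192.0.2.77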

As we look at IP whitelisting/blacklisting, it’s important to remember that security
functions in layers. For example, if you think of a prison, it’s important to have a
multitude of security clearances so that if by chance someone bypasses the first
layer, they will be stopped at the second or third. In the same way, we need to
layer our security processes for our API, both on the API management/proxy side
and within our architecture.

IP whitelisting/blacklisting fits very nicely into this layered system and can reject
some calls before any additional checks are run on them. But again, it’s important
to remember that IP addresses may change and can be spoofed, meaning you
may not always be allowing “good” users, and you may not always be stopping
malicious traffic. However, if you are selective in this process, the chances that
you will disrupt well-intentioned users from interacting with your API are pretty
slim.

XML Threat Protection

With the growth of SOA in enterprise architectures, hackers worked to find new
ways to exploit vulnerabilities, often by injecting malicious code into the data
being passed through the system. In the case of XML services, users with
malicious intent can build the XML data in such a way as to exhaust server
memory, hijack resources, brute-force passwords, perform data tampering, inject
SQL, or even embed malicious files or code.

To exhaust server memory, these attackers might create large and recursive
payloads, something you can help prevent by limiting the length of the XML
object in your API manager, as well as how deeply nested the XML may be.
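
A sketch of the kinds of limits involved (the property names are illustrative,
not MuleSoft’s exact policy schema) might look like this:

    # Illustrative XML threat protection limits; names are hypothetical.
    xmlThreatProtection:
      maxElementDepth: 20            # reject deeply nested / recursive payloads
      maxElementCount: 5000          # cap the total number of elements
      maxAttributesPerElement: 25
      maxTextValueLength: 10240      # cap the length of any single text node
      maxPayloadSizeBytes: 1048576   # 1 MB overall cap on the XML object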

Along with memory attacks, malicious hackers may also try to push through
malicious XPath/XSLT or SQL injections in an attempt to get the API layer to pass
along more details than desired to services or the database.

Malicious attacks may also include embedding system commands in the XML
object using CDATA sections, or including malicious files within the XML payload.

Of course, there are several more XML-based attacks that can be used to wreak
havoc on an API and the underlying architecture, which is why having XML threat
protection in place is key to ensuring the safety of your API, your application and
your users. Again, since security is built in layers, while the API manager can help
prevent some of these threats, monitor for malicious code, and limit the size of
the XML payload or the depth it can reach, you will still want to be meticulous in
building your services architecture to ensure that you are eliminating threats like
SQL and code injection on the off chance they are missed by your API gateway.

JSON Threat Protection

Similar to XML, JSON falls victim to several of the same malicious attacks.
Attackers can easily bloat the JSON and add recursive levels to tie up memory, as
well as inject malicious code or SQL that they anticipate your application will run.

As with XML threat protection, you will want to limit the size and depth of the
JSON payload, as well as constantly be on the watch for security risks that might
make it through the API gateway, including SQL injections and malicious code
that the user wants you to evaluate.

MuleSoft’s API Manager lets you apply XML and JSON threat protection policies,
as well as custom policies, to help guard against these attacks.
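
As a comparable illustration for JSON (again, the property names below are
hypothetical rather than MuleSoft’s exact policy schema), the limits typically cap
nesting depth, string lengths and overall payload size:

    # Illustrative JSON threat protection limits; names are hypothetical.
    jsonThreatProtection:
      maxContainerDepth: 20          # limit nested objects/arrays
      maxStringValueLength: 8192
      maxObjectEntryCount: 1000      # keys per object
      maxArrayElementCount: 500
      maxPayloadSizeBytes: 1048576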

11 Documenting and Sharing your API

Since the goal of any API is developer implementation, it’s vital not to forget one
of your most important assets—one that you should be referencing both in error
messages and possibly even in your hypermedia responses: documentation.

Documentation is one of the most important factors in determining an API’s
success, as strong, easy-to-understand documentation makes API implementation
a breeze, while confusing, out-of-sync, incomplete or convoluted documentation
makes for an unwelcome adventure, one that usually leads frustrated developers
to utilize competitors’ solutions.

Unfortunately, documentation can be one of the greatest challenges, as up until
now we have been required to write the documentation as we go, try to pull it
from code comments, or put it together after the API has been developed.

The challenge is that not only should your documentation be consistent in its
appearance, but also consistent with the functionality of your API and in sync with
the latest changes. Your documentation should also be easily understood and
written for developers (typically by an experienced documentation team).

Until recently, solutions for documentation have included expensive third-party
systems, the use of an existing CMS (Content Management System), or even
dedicated CMSs based on open source software such as Drupal or WordPress.

The challenge is that while expensive API documentation-specific solutions may
provide consistency regarding the look and feel of your API documentation
(something harder to maintain with a CMS), they still rely on the manual effort of
the developer (if derived from the code) or a documentation team to keep them
in sync.

However, with the expansion of open specs such as RAML, and the communities
surrounding them, documentation has become dramatically easier. Instead of
trying to parse code comments and have inline descriptions written (usually) by
developers, the documentation team is able to provide descriptive
documentation in the spec itself, and all parameters and examples are already
included, making the transition to documentation a snap.
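
For instance, in RAML the human-readable material lives right beside the
contract itself, so the documentation team can edit descriptions in the same file
the API is designed and tested from (a minimal, illustrative fragment; the API and
resource names are placeholders):

    #%RAML 1.0
    title: Orders API                 # illustrative example only
    documentation:
      - title: Getting Started
        content: |
          How to request an API key, authenticate, and make your first call.
    /orders:
      get:
        description: Returns the orders visible to the authenticated user.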

And with the explosion of API documentation software-as-a-service (SaaS)
companies that utilize and expand on these specs, creating an effective API portal
and documentation has never been easier or less expensive.

However, before we jump into the different documentation tools, it’s important
to understand what makes for good documentation.

Writing Good Documentation

Good documentation should act as both a reference and an educator, letting
developers quickly obtain the information they are looking for at a glance, while
also allowing them to read through the documentation to glean an understanding
of how to integrate the resource/method they are looking at.

As such, good documentation should be clear and concise, but also visual,
providing the following (several of these elements are shown in the RAML sketch
after the list):

 A clear explanation of what the method/resource does
 Call outs that share important information with developers, including warnings and errors
 A sample call with the correlating media type body
 A list of parameters used on this resource/method, as well as their types, special formatting, rules and whether or not they are required
 A sample response, including media type body
 Code examples for multiple languages including all necessary code (e.g. Curl with PHP, as well as examples for Java, .Net, Ruby, etc.)
 SDK examples (if SDKs are provided) showing how to access the resource/method utilizing the SDK for their language
 Interactive experiences to try/test API calls (API Console, API Notebook)
 Frequently asked questions/scenarios with code examples
 Links to additional resources (other examples, blogs, etc.)
 A comments section where users can share/discuss code
 Other support resources (forums, contact forms, etc.)
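
Several of the items above, such as parameter rules, sample responses and their
media types, can be captured directly in the spec so they never drift out of sync.
A brief, illustrative RAML fragment (resource and field names are placeholders):

    /orders:
      get:
        description: Returns the orders visible to the authenticated user.
        queryParameters:
          status:
            description: Filter results by order status
            type: string
            enum: [ pending, shipped, cancelled ]
            required: false
            example: shipped
        responses:
          200:
            body:
              application/json:
                example: |
                  [ { "id": 1001, "status": "shipped" } ]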
