Eversana API Design Best Practices
A document that describes the Best Practices for API design for your
enterprise
C4E Leader:
Version: 0.1
Date: 08/12/2019
Document history
1 Introduction
1.1 Executive Summary
1.2 Purpose of this document
1.3 Intended Audience
2 Planning your API
2.1 Context
3 Designing the Specification
3.1 Versioning
3.2 Spec-Driven Development
3.3 Choosing a Spec
4 Using RAML
4.1 Getting Started
4.2 URI Parameters
4.3 Query Parameters
4.4 Responses
4.5 Resource Types
4.6 Traits
5 Prototyping and Agile Design
5.1 Mocking your API
5.2 Getting Feedback
6 Authorizing and Authentication
6.1 Open Authorization
6.2 Generating Tokens
6.3 OAuth2
7 Designing your resources
8 Designing your Methods
9 Handling Responses
10 Managing your API with Proxy
11 Documenting and Sharing your API
1 Introduction
1.1 Executive Summary
The demand for flexibility and extensibility has driven the development of APIs and tools alike, and in many regards it has never been easier to create an API than it is today, with multitudes of frameworks (such as JAX-RS, Apigility, Django REST Framework, Grape), specs (RAML, Swagger, API Blueprint, IO Docs) and tools (API Designer, API Science, APImatic) available.
However, despite the predictability of the demand for APIs, this tidal wave has taken many by surprise. And while many of these tools are designed to encourage best practices, API design is constantly overlooked in favor of development efficiency. The problem is that while this lack of focus on best practices allows for rapid development, it is nothing more than building a house without a solid foundation. No matter how quickly you build the house, or how nice it looks, without a solid foundation it is just a matter of time before the house crumbles to the ground, costing you more time, energy, and resources than it would have to simply build it right the first time.
By following best practices, and carefully implementing these standards, while
you may increase development time by weeks, you can shave off months to years
of development headaches and potentially save thousands to hundreds of
thousands of dollars.
What is an API?
In the simplest of terms, API is the acronym for Application Programming Interface, which is a software intermediary that allows two applications to talk to each other. In fact, each time you check the weather on your phone, use the Facebook app or send an instant message, you are using an API.
Every time you use one of these applications, the application on your phone is connecting to the Internet and sending data to a server. The server then retrieves that data, interprets it, performs the necessary actions and sends it back to your phone. The application then interprets that data and presents you with the information you wanted in a human-readable format.
What an API really does, however, is provide a layer of security. Because you are making succinct and explicit calls, your phone's data is never fully exposed to the server, and likewise the server is never fully exposed to your phone.
1.2 Purpose of this document
This document presents best practices for designing and building APIs using MuleSoft's Anypoint Platform.
1.3 Intended Audience
The intended audience for this document includes the Eversana team, comprising API Product Owners, Delivery Leads, Enterprise Architects, and Solutions Architects.
2 Planning your API
2.1 Context
Perhaps the foundation of the foundation, understanding why you are building an API is a crucial step towards understanding what data/methods your API should make accessible and how your users will utilize it. Who are your API users: are they your customers, third-party services, or developers who are looking to extend upon your application for their customers? Understanding the market you are serving is vital to the success of any product or service.
Unless you are starting from ground zero and taking an API-first approach, there's a good chance you have other applications and services that your API may need to interact with. You should take time to focus on how your API and these existing applications will interact.
Along with maintaining your API, you should also plan on how you are going to
version your API. Will you include versioning in the URL such as
http://api.mysite.com/v1/resource, or will you return it in the content-type
(application/json+v1), or are you planning on creating your own custom
versioning header or taking a different approach altogether?
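To make these options concrete, here is an illustrative sketch (the URLs and header values are hypothetical) of how the same request might look under each scheme:

GET http://api.mysite.com/v1/resource
(version in the URI)

GET http://api.mysite.com/resource
Content-type: application/json+v1
(version in the content-type)

GET http://api.mysite.com/resource
Version: 1
(version in a custom header)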
Keep in mind that your API should be built for the long-term, and as such you should plan on avoiding versioning as much as possible. However, more likely than not there will come a time when you need to break backwards compatibility, and versioning will be the necessary evil that lets you do so. We'll talk more about versioning in Section 3.1.
Another important aspect to plan for is how developers will interact with your
API. Will your API be open like the Facebook Graph API, or will you utilize an API
Key? If you’re using an API key, do you plan on provisioning your API to only allow
certain endpoints, or set limits for different types of users? Will developers need
an access token (such as OAuth) in order to access users' data?
It’s also important to think about security considerations and throttling. How are
you going to protect your developers' data and your service architecture? Are
you going to try and do this all yourself, or does it make more sense to take
advantage of a third-party API Manager such as MuleSoft?
The answers to these questions will most likely depend on the requirements that
you have for your API, the layers of security that you want, and the technical
resources and expertise you have at your company. Generally, while API
Management solutions can be pricey, they tend to be cheaper than doing it
yourself.
Once you understand why you are building your API, and what it needs to be able
to accomplish you can start creating the blueprint or spec for your API. Again,
going back to the building a house scenario, by having a plan for how your API
should look structurally before even writing a line of code you can isolate design
flaws and problems without having to course correct in the code.
Using a process called Spec-Driven Development, you will be able to build your
API for the long-term, while also catching glitches, inconsistencies and generally
bad design early on. While this process usually adds 2–4 weeks onto the
development cycle, it can save you months and even years of hassle as you
struggle with poor design, inconsistencies, or worse—find yourself having to build
a brand-new API from scratch.
The idea behind a REST API is simple: it should be flexible enough to endure. That
means as you build your API, you want to plan ahead—not just for this
development cycle, not just for the project roadmap, but for what may exist a
year or two down the road.
This is really where REST excels, because with REST you can take and return
multiple content types (meaning that if something comes along and replaces
JSON, you can adapt) and even be fluid in how it directs the client with
hypermedia. Right off the bat you are being set up for success by choosing the flexibility of REST. However, it's still important that you go in with the right
mindset—that the API you build will be long-term focused.
3.1 Versioning
Versioning is important to plan for, but all too often companies look at an API the
same way they do desktop software. They create a plan to build an API—calling it
Version 1—and then work to get something that’s just good enough out the door.
But there’s a huge difference between creating a solid foundation that you can
add onto and a half-baked rush job just to have something out there with your
name on it. After all, people will remember your name, for better or worse.
The second problem is they look at versions as an accomplishment. I remember one company that jumped from Version 2 to Version 10 just because they thought it sounded better and made the product look more advanced. But with APIs, the opposite holds true: the lower the version number, the better.
Think about the time and cost it takes to build an API. Now think about the time it
takes to get developers to adopt an API. By creating a solid API now, you avoid all
of those costs upfront. And in case you’re thinking, “It’s no big deal, we can
version and just get developers to upgrade,” you might want to think again. Any
developer evangelist will tell you one of the hardest things to do is to get
developers to update their code or switch APIs. After all, if it works, why should
they change it? And remember, this is their livelihood we’re talking about—they
can spend time making money and adding new features to their application, or
they can spend time losing money trying to fix the things you broke—which would
you prefer to base your reputation upon?
Versioning an API is not only costly to you and the developer, it also requires
more time on both ends, as you will find yourself managing two different APIs,
supporting two different APIs, and confusing developers in the process. In
essence, when you do version, you are creating the perfect storm.
You should NOT version your API just because you've:
- Added new resources
- Added data in the response
- Changed technologies (Java to Ruby)
- Changed your application's services

Remember, your API should be decoupled from both your technology stack and your service layer, so that as you make changes to your application's technology, the way the API interacts with your users is not impacted.
Remember the uniform interface—you are creating separation between your
API and your application so that you are free to develop your application as
needed, the client is able to develop their application as needed, and both are
able to function independently and communicate through the API.
However, you SHOULD consider versioning your API when you need to make backwards-incompatible changes that would break your users' existing implementations.
3.2 Spec-Driven Development
The next thing that's important to understand is that we, as developers, are poor
at long-term design.
Think about a project you built three years ago, even two years ago, even last
year. How often do you start working on a project only to find yourself getting
stuck at certain points, boxed in by the very code you wrote? How often do you
look back at your old code and ask yourself, “What was I thinking?”
The simple fact is that we can only see what we can see. While we may think we
are thinking through all the possibilities, there’s a good chance we’re missing
something. I can't tell you how many times I've had the chance to do peer programming where I would start writing a function or method, and the other developer would ask why I didn't just do it in two lines of code instead. Of course, their way was the right way, and super simple, but my mind was set, and I
had developer tunnel vision—something we all get that is dangerous when it
comes to long-term design.
By accepting that we, by ourselves, are not good at long-term design, we actually
enable ourselves to build better designs. By understanding that we are fallible and
having other developers look for our mistakes (in a productive way), we can
create a better project and a longer-lasting API. After all, two heads are better
than one!
In the past this has been difficult, especially with APIs. Companies struggle to
afford (or even recognize the benefit of) peer programming, and building out
functional mock-ups of an API has proven extremely costly. Thankfully, advances
in technology have made it possible to get feedback—not just from our
coworkers, but also from our potential API users—without having to write a single
line of code! This means that where before we would have to ship to find
inconsistencies and design flaws, now we can get feedback and fix them before
we even start coding our APIs, saving time and money not only in development,
but also support.
Persistent:
All things evolve, and the application and spec are no different. However, each
evolution must be just as carefully thought out as the original foundation. The
spec can change, but each change must be justified, carefully evaluated, tested
and perfected. In the event of redevelopment, if the spec is found not to be
renderable, it is important to go back and correct the spec by re-engaging in user
testing and validation, and then updating the code to match to ensure that it is
consistent with your spec, while also ensuring that the necessary changes do not
reduce the longevity of your application.
4 Using RAML
4.1 Getting Started
One of the easiest ways to start working with RAML is with the API Designer, a
free open source tool available on the RAML website at http://raml.org/projects.
To get started even faster, MuleSoft also offers a free, hosted version of its API Designer. You can take advantage of this free service by visiting
https://anypoint.mulesoft.com/apiplatform/.
RAML requires that every API have a title, a version and a baseUri. These three
aspects help ensure that your API can be read, versioned and accessed by any
tools you choose to implement. Describing these in RAML is as easy as:
#%RAML 0.8
title: My Book
version: 1
baseUri: http://server/api/{version}
The nice thing is that the API Designer starts off by providing you the first three
lines, so all you need to add is your baseUri. You’ll also notice that RAML has a
version placeholder, letting you add the version to the URI if desired.
To add a resource to your RAML file, simply declare the resource by using a slash
“/” and the resource name followed by a colon “:” like so:
#%RAML 0.8
title: My Book
version: 1
baseUri: http://server/api/{version}
/my-resource:
To describe the resource, you can then add a displayName and a description:
#%RAML 0.8
title: My Book
version: 1
baseUri: http://server/api/{version}
/my-resource:
  displayName: My Resource
  description: this is my resource, it does things
To add a method, such as GET, POST, PUT, PATCH or DELETE, simply add the lowercase name of the method with a colon:
#%RAML 0.8
title: My Book
version: 1
baseUri: http://server/api/{version}
/my-resource:
  get:
  post:
You can then add descriptions, query parameters, responses with examples and
schemas, or even additional nested endpoints, letting you keep all of your
resources grouped together in an easy-to-understand format:
#%RAML 0.8
title: My Book
version: 1
baseUri: http://server/api/{version}
/my-resource:
  displayName: My Resource
  description: this is my resource, it does things
  get:
    description: this is my GET method
    queryParameters:
      name:
    responses:
      200: …
  post:
    description: this is my post method
  /sub-resource:
    displayName: Child Resource
    description: this is my sub resource
4.2 URI Parameters
/my-resource:
  /sub-resource/{id}:
  /{searchFilter}:
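Although not shown above, RAML also lets you describe each URI parameter explicitly. A minimal sketch, assuming a numeric {id}, might look like:

/my-resource:
  /sub-resource/{id}:
    uriParameters:
      id:
        type: integer
        description: The unique identifier of the sub resource
        example: 1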
4.3 Query Parameters
/my-resource:
  get:
    queryParameters:
      name:
        displayName: Your Name
        type: string
        description: Your full name
        example: Michael Stowe
        required: false
      dob:
        displayName: DOB
        type: number
        description: Your date of birth
        example: 1985
        required: true
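As a hypothetical illustration, a client supplying these query parameters would make a request along these lines:

GET http://server/api/1/my-resource?name=Michael%20Stowe&dob=1985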
4.4 Responses
/my-resource:
  get:
    responses:
      200:
Within the 200 response we can add the body key to indicate the body content
they would receive back within the 200 response, followed by the content-type
(remember APIs can return back multiple formats), and then we can include a
schema, an example, or both:
/my-resource:
  get:
    responses:
      200:
        body:
          application/json:
            example: |
              {
                "name" : "Michael Stowe",
                "dob" : "1985",
                "author" : true
              }
To add additional content-types we would simply add a new line with the same
indentation as the “application/json” and declare the new response type in a
similar fashion (e.g.: application/xml or text/xml).
To add additional responses, we can add the response code with the same
indentation as the 200, using the appropriate status code to indicate what
happened and what they will receive.
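For instance, a sketch of the same resource extended with an XML representation and a hypothetical 404 response might look like:

/my-resource:
  get:
    responses:
      200:
        body:
          application/json:
            example: |
              { "name" : "Michael Stowe" }
          application/xml:
            example: |
              <user><name>Michael Stowe</name></user>
      404:
        description: The requested resource was not found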
4.5 Resource Types
resourceTypes:
  - collection:
      description: Collection of available <<resourcePathName>>
      get:
        description: Get a list of <<resourcePathName>>.
        responses:
          200:
            body:
              application/json:
                example: |
                  <<exampleGetResponse>>
          301:
            headers:
              location:
                type: string
                example: |
                  <<exampleGetRedirect>>
          400:
/my-resource:
  type:
    collection:
      exampleGetResponse: |
        …
      exampleGetRedirect: |
        …
/resource-two:
  type:
    collection:
      exampleGetResponse: |
        …
      exampleGetRedirect: |
        …
In the above example we first define the resourceType “collection,” and then call
it into our resource using the type property. We are also taking advantage of
three placeholders <<resourcePathName>> that are automatically filled with the
resource name (“my-resource,” “resource-two”), and <<exampleGetResponse>>
and <<exampleGetRedirect>>, which we defined in our resources. Now, instead of
having to write the entire resource each and every time, we can utilize this
template, saving substantial amounts of code and time.
Both “my-resource” and “resource-two” will now have a description and a GET
method with 200, 301 and 400 responses. The 200 response returns back an
application/json response with the example response we provided using the
<<exampleGetResponse>> placeholder, and a redirect in the case of a 301 with
<<exampleGetRedirect>>.
Again, we will get all of this without having to write repetitive code by taking
advantage of resourceTypes.
4.6 Traits
Like resourceTypes, traits allow you to create templates, but specifically for method behaviors such as isPageable, isFilterable and isSearchable.
traits:
  - searchable:
      queryParameters:
        query:
          description: |
            JSON array [{"field1","value1","operator1"},...] <<description>>
          example: |
            <<example>>
/my-resource:
  get:
    is: [searchable: {description: "search by location name", example: "[\"city\",\"San Fran\",\"like\"]"}]
To utilize traits, we first define the trait that we want to use, in this case
“searchable” with the query parameters that we want to use, including the
description (using the <<description>> placeholder) and an example (using the
<<example>> placeholder).
However, unlike with resourceTypes, we pass the values for these placeholders in
the searchable array within the “is” array (which can hold multiple traits).
Again, like resourceTypes, traits are designed to help you ensure that your API is
uniform and standard in its behaviors, while also reducing the amount of code
you have to write by encouraging and allowing code reuse.
5 Prototyping and Agile Design
As you design your spec, one of the most important things you can do is involve
your users, getting crucial feedback to ensure it meets their needs, is consistent
and is easily consumable.
The best way to do this is to prototype your API and have your potential users
interact with it as if it was the actual API you are building. Unfortunately, until
recently this hasn’t been possible due to constraints in time and budget
resources. This has caused companies to utilize a “test it as you release it”
method, where they build the API to what they think their users want, and after
doing internal QA, release it in the hope that they didn’t miss anything. This Wild
West style of building APIs has led to numerous bugs and inconsistencies, and
greatly shortened API lifetimes.
5.1 Mocking your API
Thankfully, RAML was designed to make this process extremely simple, allowing
us to prototype our API with the click of a button, creating a mock API that relies
on example responses that can be accessed from anywhere in the world by our
users.
Likewise, Swagger and API Blueprint offer some building and mocking tools,
however, right now there isn’t anything quite as simple or easy to use as
MuleSoft’s free mocking service.
MuleSoft’s API designer not only provides an intuitive way to visually design your
API, as well as interact and review resources and methods for
completeness/documentation purposes, but it also provides an easy toggle to
quickly build a hosted, mocked version of your API that relies on the “example”
responses.
When set to "On," MuleSoft's free API Designer will comment out your current
baseUri and replace it with a new, generated one that may be used to make calls
to your mock API.
This new baseUri is public, meaning that your potential users can access it
anywhere in the world, just as if your API was truly live. They will be able to make
GET, POST, PUT, PATCH, DELETE and OPTIONS calls just as they would on your live
API, but nothing will be updated, and they will receive back example data instead.
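As an illustration only (the generated host name below is made up, as the actual URL is assigned by the service), a call to the mocked API and its response might look like:

GET http://mocksvc.example.com/abc123/api/my-resource

HTTP/1.1 200 OK
Content-Type: application/json

{
  "name" : "Michael Stowe",
  "dob" : "1985",
  "author" : true
}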
Again, what makes prototyping so important is that your users can actually try out
your API before you even write a line of code, helping you catch any
inconsistencies within the API, such as inconsistencies in resource naming,
method interaction, filter interactions or even in responses.
5.2 Getting Feedback
Once you provide your potential API users with a prototype and the tools to try it
out, the next step is to provide a simple way for them to give you feedback.
Ideally, during this stage you’ll have a dedicated resource such as an API engineer
or a Project Manager that can interact with your testers to not only get their
feedback, but also have conversations to fully understand what it is that they are
trying to do, or what it is that they feel isn’t as usable as it should be. Keep in
mind you’ll also want to encourage your testers to be as open, honest and blunt
as possible, as they may try to be supportive by ignoring issues or sugarcoating
the design flaws that bother them at first—a kind but costly mistake that will
ultimately harm both you and your potential users.
This step provides two valuable resources to the company. First, it provides a
clearer understanding of what it is you need to fix (sometimes the problem isn’t
what a person says, but rather what the person is trying to do), while also telling
your users that you listen to them, creating ownership of your API.
Many companies talk about creating a strong developer community, but the
simplest way is to involve developers from day one. By listening to their feedback
(even if you disagree), you will earn their respect and loyalty—and they will become advocates for your API.
It’s also important to understand that people think and respond differently. For
this reason you’ll want to create test cases that help your testers understand
what it is you are asking of them. However, you should not make them so
restrictive or “by the book” that testers cannot veer off course and try out
“weird” things (as real users of your API will do). This can be as simple as
providing a few API Notebooks that walk developers through different tasks and
then turning them loose on those notebooks to create their own scenarios. Or it
can be as complex as creating a written checklist (as is typically used in user
experience testing).
If you take the more formal route, it’s important to recognize that you will have
both concrete sequentials (“I need it in writing, step by step”) and abstract
randoms (“I want to do this. Oh, and that.” “Hey look—a squirrel!”), and you’ll
want to empower them to utilize their unique personalities and learning/working
styles to provide you with a wide scope of feedback.
Your concrete sequential developers will already do things step by step, but your
abstract randoms are more likely not to go by the book—and that’s okay. Instead
of pushing them back onto the scripted testing process, encourage them to try
other things (by saying things like, “That’s a really interesting use case; I wonder
what would happen if...”) as again, in real life, this is exactly what developers will
do, and this will unlock issues that you never dreamed of.
The purpose of the prototyping process isn’t to validate that your API is ready for
production, but to uncover flaws so that you can make your API ready for
production. Ideally, in this stage you want to find 99 percent of the design flaws
so that your API stands as a solid foundation for future development while also
remaining developer-friendly. For that reason it’s important not to just test what
you’ve already tested in-house, but to let developers test every aspect of your
API. The more transparent your API is, and the more feedback you get from your
potential API users, the more likely you are to succeed in this process.
Remember, there’s nothing wrong with finding problems. At this point, that is the
point. Finding issues now lets you circle back to the design phase and fix them before writing any code.
You’ll know your API is ready for the real world when you send out the prototype
and, after being reviewed by a large group of potential API users (a minimum of
10; 20–50 is ideal), you get back only positive feedback.
6 Authorizing and Authentication
6.1 Open Authorization
Early on, APIs handled authorization through the use of basic authorization, or asking the user for their username and password, which was then forwarded to the API by the software consuming it. This, however, creates a huge security risk for multiple reasons. The first is that it gives the developers of the software utilizing your API
access to your users’ private information and accounts. Even if the developers
themselves are trustworthy, if their software is breached or hacked, usernames
and passwords would become exposed, letting the hacker maliciously use and
access your users’ information and accounts.
To help deal with this issue, OAuth, or Open Authorization, a token-based authorization format, was introduced. Unlike basic authorization, OAuth prevents
the API client from accessing the users’ information. Instead it relays the user to a
page on your server where they can enter their credentials, and then returns the
API client an access token for that user.
The huge benefit here is that the token may be deleted at any time in the event of
misuse, a security breach, or even if the user decides they no longer want that
service to have access to their account. Access tokens can also be used to restrict
permissions, letting the user decide what the application should be able to do
with their information/account.
Once again, Facebook is a great example. When you log in to Facebook, a popup
comes up telling you that the application wants to access your account and asking
you to log in with your Facebook credentials. Once this is done it tells you exactly
which permissions the application is requesting, and then lets you decide how it
should respond.
Notice that this is a page on Facebook’s server, not on Digg. This means that all
the information transmitted will be sent to Facebook, and Facebook will return an
identifying token back to Digg. In the event I was prompted to enter a
username/password, that information would also be sent to Facebook to
generate the appropriate token, keeping my information secure.
Now you may not need as complex a login as Facebook's or Twitter's, but the
principles are the same. You want to make sure that your API keeps your users’
data (usernames and passwords) safe and secure, which means creating a layer of
separation between their information and the client. You should never request
login credentials through public APIs, as doing so makes the user’s information
vulnerable.
6.2 Generating Tokens
It's also extremely important to ensure that each token is unique, based both on
the user and the application that it is associated with. Even when role-based
permissions are not required for the application, you still do not want a generic
access token for that user, since you want to give the user the ability to have
control over which applications have access to their account. This also provides an
accountability layer that allows you to use the access tokens to monitor what an
application is doing and watch out for malicious behaviors in the event that they
are hacked.
It’s also smart to add an expiration date to the token, although for most
applications this expiration date should be a number of days, not minutes. In the case of sensitive data (credit cards, online banking, etc.) it makes more sense to have a very short time window during which the token can be used, but for other applications, doing so only inconveniences the user by requiring them to log in again and again. Most access tokens last between 30 and 90 days, but you should
decide the timeframe that works for you.
By having the tokens automatically expire, you are adding another layer of
security in the event that the user forgets to manually remove the token and are
also helping to limit the number of applications that have access to your users’
data. In the event that the user wants that application to be able to access their
account, they would simply reauthorize the app by logging in through the OAuth
panel again.
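As a rough sketch (the field names here are illustrative, not a prescribed format), a stored access token record tying together the user, the application and an expiration might look like:

{
  "access_token" : "d41d8cd98f00b204e9800998ecf8427e",
  "user_id" : 42,
  "client_id" : "my-calling-app",
  "scope" : [ "read", "write" ],
  "expires" : "2019-11-10T00:00:00Z"
}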
6.3 OAuth2
Your application then generates an access token based on both the user and the
application requesting access. In other words, the access token is tightly coupled
to both the user and the application, and is unique for this combination. However,
the access token can be independent of access permissions or scope, as you may
choose to let the user dictate (or change) these permissions from within your
application. By having the scope remain changeable or decoupled from the hash of the token, users can have any changes they make regarding the scope from within your application applied immediately, without needing to delete the token or generate a new one.
The access token created should also have a set expiration (again, usually days,
but this should depend on your API’s needs). This is an additional security
measure that helps protect a user’s information by requiring them to occasionally
reauthorize the application requesting access to act on their behalf. (This is often
as simple as clicking “reauthorize” or “login with....”)
As an added security measure, you can also restrict the access token to the
domain of the calling application.
Once the application receives the access token and client ID or identifier, it can
then store this information in its system, and the handshake is complete until the
access token either expires or is deleted by the user. At that time, should the user
choose to reauthorize the application, the handshake starts back at the beginning.
In a three-legged OAuth process, the flow is the same, with the exception of
having one more party involved (such as an OAuth service provider) who would
then act as the middle leg and provide your application with the information.
When implementing OAuth it’s important to understand that it is the only thing
preventing free access to your users’ accounts by the application— and any
malicious users who try to hijack or abuse it.
This means that you need to take a security-first approach when building out your
OAuth interface, and that before building anything on your own it is important to understand the types of attacks you will need to defend against.
Attackers may attempt to use brute force attacks against your OAuth solution or utilize a man-in-the-middle attack (pretending to be your server and sneaking into
the calling application’s system that way).
It’s also important to remember that your users’ information is only as secure as
their access tokens. I’ve already mentioned being sure to make all calls over SSL,
but you should also work with your API users to ensure they are properly and
securely storing access tokens.
The good news is that once you have an OAuth service, adding it to your API’s
definition in RAML, and making it accessible through the different tools available,
is extremely easy.
For OAuth 1, you would simply need to state that it is securedBy oauth_1_0 and
provide a requestTokenUri, an authorizationUri and the tokenCredentialsUri as
shown below in the Twitter RAML example:
securitySchemes:
  - oauth_1_0:
      type: OAuth 1.0
      settings:
        requestTokenUri: https://api.twitter.com/oauth/request_token
        authorizationUri: https://api.twitter.com/oauth/authorize
        tokenCredentialsUri: https://api.twitter.com/oauth/access_token
securedBy: [ oauth_1_0 ]

For OAuth 2, you would similarly declare the type as OAuth 2.0 and provide the authorizationUri, accessTokenUri, authorizationGrants and scopes, as shown in this Instagram RAML example:
securitySchemes:
  - oauth_2_0:
      type: OAuth 2.0
      describedBy:
        headers:
          Authorization:
            description: |
              Used to send valid access token
            type: string
      settings:
        authorizationUri: https://api.instagram.com/oauth/authorize
        accessTokenUri: https://api.instagram.com/oauth/access_token
        authorizationGrants: [ code, token ]
        scopes:
          - basic
          - comments
          - relationships
          - likes
securedBy: [ oauth_2_0 ]
You can learn more about using OAuth within RAML in the RAML spec under
"Security" at http://raml.org/spec.html#security. But thankfully the process of
implementing existing OAuth services into your RAML-based applications is far
simpler than actually creating them, and it makes it easy for your developers to
access real information when debugging or exploring your API.
7 Designing your resources
Resources are the primary way the client interacts with your API, and as such it's
important to carefully adhere to best practices when designing them, not only for
usability purposes, but to also ensure that your API is long-lived.
But what makes REST truly unique is that the resources are designed to be
decoupled from their actions, meaning that you can perform multiple actions on a
single resource. This means that you would be able to create, edit and delete
users all within the /users resource.
Decoupled Architecture
For this reason, resources are designed to be decoupled from your architecture
and tied not to specific methods or classes, but rather to generalized application
objects. This is a big change from SOAP, where the calls are tied to the class
methods, and RPC, where naming conventions tend to be tightly coupled to the
action you’re taking (getUsers).
By decoupling your resources from your architecture, you are ensuring that you
can change backend services or technologies without impacting how the client
interacts with your API while also providing flexibility in how the client interacts
with your API through the use of explicit methods or representations.
One of the best ways to ensure that your resources are decoupled is to think of
them as webpage URIs. For example, if sharing information about your company,
you would probably send the user to an “about” section on your website. This
might look something like “yourdomain.com/about” or
“yourdomain.com/company.”
In the same way, you can build out resources using that same navigational
principle. As mentioned above in the CRM example, users could be directed to the
/users resource, clients to the /clients resource, vendors to /vendors, etc.
Another way to be sure that you are enforcing this navigational style and avoiding
tight coupling of your resources to methods or actions is to utilize nouns for the
name of your resource. If you find yourself using verbs instead, such as
“/getUsers” or “/createVendor,” there’s a good chance you’re creating a tightly
coupled resource that is designed to only perform one or two actions.
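As a quick sketch (the resource names are illustrative), the contrast looks like this in RAML:

# Tightly coupled, verb-based naming to avoid:
/getUsers:
  get:
/createVendor:
  post:

# Decoupled, noun-based resources:
/users:
  get:
  post:
/vendors:
  get:
  post: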
Resources should also take advantage of the plural form. For example, /users
represents the full scope of the user object, allowing interaction with both a
collection (multiple records) and an item (a single user). This means that the only
time you would want to take advantage of a singular noun is if the only possible
action that can be taken is specific to a single item or entity. For example, if you
were creating a shopping cart API, you may elect to utilize the singular form "/cart" for the resource rather than "/carts." But again, in general, the plural form will offer you the most flexibility and extendibility; even when a feature is not built into your application, or there are current restrictions (for example, only letting companies have one location), those limitations may change in the future. And the last thing you want is to have both a plural and singular form of the resource. For example, imagine users having to decide whether to use /location or /locations.
In other words, only use the singular format when there’s no possibility of the
resource having multiples—a scenario that is extremely rare. After all, even in
the /cart example, you may decide someday to give users multiple shopping carts
(saved carts, business carts, personal carts, etc.). So as you build out your
resources, you should be thinking not just about planning for now, but planning
for what could happen down the road as well.
Many developers today assume that supporting JSON alone is enough. This is very shortsighted, as we forget that only a few years ago XML was king.
And now, just a short while later, it is repeatedly mocked by “progressive
developers,” and the world is demanding JSON (and for good reason).
With the emergence of JSON, many enterprises were caught off guard, stuck
serving XML via SOAP APIs with no new way to meet their customers’ needs. It is
only now that we are seeing many enterprises in a position to provide RESTful
APIs that serve JSON to their customers.
The last thing we want to do is put ourselves in this position again. And with new
specs emerging every day, it is just a matter of time. For example, YAML (officially "YAML Ain't Markup Language") is already gaining popularity, and while it may not be
the primary choice for most developers today, that doesn’t mean some of your
most influential clients won’t ask you for it.
By preparing for these types of scenarios, you also put yourself in a position to
meet all of your clients’ needs and provide an extremely flexible and usable API.
By letting developers decide which type of content-type they are utilizing, you let
them quickly and easily implement your API in their current architecture with
formats they are comfortable with. Surprisingly, along with functionality and
flexibility, this is something that many developers are looking for.
Today, when you browse the Web your client (browser) sends a content-type
header to the server with each data-sending request, telling the server which type
of data it is receiving from the client.
Once the server receives this content-type, it not only knows what data it has
received but, if it is a recognized format, how to process it as well. This same
principle can be applied to your API, letting you know which type of data your
client is sending and how to consume it. It also tells you which data format they
are working with.
To go a step further, you can also take a look at the Accept header to see which
type of data they are expecting in return. Hypothetically, when you build out your
architecture your client should be able to send you XML by declaring it in the
content-type and expect JSON in return by declaring a desired JSON response in
the Accept header.
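A hypothetical exchange illustrating this (headers only) might look like:

POST /my-resource HTTP/1.1
Content-Type: application/xml
Accept: application/json

Here the client declares it is sending XML in the request body while asking for JSON back in the response.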
This creates the most flexibility for your API and lets it act as a mediator when
used in conjunction with other APIs. However, it also provides a lot of opportunity
for confusion. Because your API is designed to have a uniform interface, I would
recommend not taking advantage of this wonderful header, but rather relying on
the content-type to determine which data format they are working with, and then
passing back that same format.
XML
Defined by the W3C, XML or the Extensible Markup Language was designed to
present a format that was both machine- and human-readable. Some of the more
common formats of XML include RSS (commonly used for feeds), Atom, XHTML
and, of course, SOAP.
XML also encourages the use of strict schemas and was the choice format for
many enterprises, causing the move to JSON to be more challenging.
However, while descriptive, XML takes up more bandwidth than JSON, and while
commonly used, does not have the same broad language support. While there are XML parsers available for most languages, JSON tends to enjoy simpler, more native support.
<books>
  <book>
    <title>This is the Title</title>
    <author>Imag E. Nary</author>
    <description>
      <![CDATA[Once upon a time there was a great book]]>
    </description>
    <price>12.99</price>
  </book>
  <book>
    <title>Another Book</title>
    <author>Imag E. Nary</author>
    <description>
      <![CDATA[This is the sequel to my other book]]>
    </description>
    <price>15.99</price>
  </book>
</books>
JSON
{"books" : [
  {
    "title" : "This is the Title",
    "author" : "Imag E. Nary",
    "description" : "Once upon a time there was a great book",
    "price" : "12.99"
  },
  {
    "title" : "Another Book",
    "author" : "Imag E. Nary",
    "description" : "This is the sequel to my other book",
    "price" : "15.99"
  }
]}
You can also define strict JSON through the use of JSON Schemas, although these
are not as commonly used.
YAML
Books:
  -
    title: This is the Title
    author: Imag E. Nary
    description: Once upon a time there was a great book
    price: 12.99
Versioning
I cannot stress enough that when it comes to building an API, your goal should be
to create one that is so amazing that you can avoid versioning altogether.
However, as hard as we try, there is a good chance (at least with today’s
technology) that at some point in time we will find ourselves having to version our
API for one reason or another.
There have been several suggested methods for versioning, but the first thing to
remember is that, fundamentally, versioning is a bad thing. It’s important to
understand that the lower the version number, the better. In the desktop
software world, we push for that next number, creating Version 1, 2 and—in
some cases—skipping numbers altogether to make the product sound more
advanced than it is! But in the API world, the sign of success is not having to version.
This means you should consider avoiding minor versioning altogether, since it serves no real purpose in an API. Any feature changes should be made in a backwards-compatible way that does not force clients to update their code.
You could also make the argument that minor versioning lets developers quickly
know there’s been a change to your API—an argument that makes sense on the
surface. Of course the counter argument is that you may have developers who
misunderstand minor versioning and instead rush to try and upgrade their system
to the new API without needing any of the new features (or while they’re already
taking advantage of them without realizing it). This may result in unnecessary
support calls and emails, as well as confusion (“Can I do this in 1.1 or only in 1.2?
And how do I access version 1.1 instead of 1.2?”).
The other counterpoint to this argument is that if you build a strong developer
community, developers will talk about new features (although not everyone will
be involved in the community), and if you utilize registration to gain an API key
(spoiler alert— you should) you can keep in touch with all of your developers via
email. (They may not be read, but then again, minor versioning in the code might
not be seen either.)
So with this in mind, let’s take a look at the three different mainstream schools of
thought regarding how to version your API.
In the URI
This method includes the version number in the base URI used for API calls,
making developers explicitly call the API version that they want. For example, to
access Version 1 of the API, one would use api.domain.com/v1/resource to access
the API, whereas for Version 2 they would call api.domain.com/v2/resource. This
means that when reading documentation and implementing your API, developers
will be forced to look at the version number, since they may not notice it when
briefly looking over code samples unless it is prominently called out. This makes
this method preferable for APIs that are catering to newer developers.
One argument against the URI method is that it doesn’t allow the API to be
hypermedia driven, or that the content-type method makes this easier. This is
partially because REST is designed to be hypermedia driven and not tightly coupled to the URI.
However, even with the content-type, we currently have no good way to know
what the client supports. So when calling Version 2 from a client that only
supports certain Version 2 segments, we’re still likely to get back Version 2 links in
the hypertext response, causing the client application to fail.
One advantage of URI versioning is that it tells developers which version they are
using, and is easier for newer developers to implement. It also helps prevent
confusion if developers forget to append the version on top of the content-type
version type (which if using this method should throw an error to prevent
ambiguity).
Of course, it’s also very easy for the base URI to become hidden somewhere in an
include, meaning that developers may not explicitly know which version of the
API they are using without having to dig into their own code. Just the same, the
other methods run this same risk depending on how the client’s application is
architected.
In the Content-Type
This method is arguably cleaner and far less coupled than the URI method. With
this method, developers would append the version to the content-type, for
example:
Content-type: application/json+v1
This also raises questions regarding a uniform interface, as you are transitioning
the user between two incompatible versions of your API to accomplish different
things. On the other hand, this may help developers transition from one version
to another, as they can do it over time instead of all at once. Just the same, I can’t
say it is recommended, as I believe that depending on business needs and
implementation, it may cause far more harm than good.
Another issue with the content-type is that developers have to know that they
need to call this out. This means that you have to not only have clear
documentation, but also validation regarding whether or not this information is
provided.
You must also have a central routing mechanism between your two APIs, which
presents a possible domain challenge. Since a key reason you are versioning is
that your current version no longer meets your needs, you are probably not just
rebuilding one section of the API, but rather its very foundation. This may make
taking advantage of the content-type method of versioning far more complex
than having multiple, but explicit, URIs.
Perhaps the biggest benefit of the content-type method is if you have two
different versions of your application (some customers are on V1, some on V2)
and you want to provide an API that can accommodate both. In that case you’re
not really versioning your API, but rather letting customers tell you which version
of your application they’re on so you can provide them with the appropriate data
structures and links. This is an area where the content-type method absolutely
excels.
In a Custom Header
The custom header is very similar to the content-type header, with the exception
that those using this method do not believe that the version belongs in the
content-type header, and instead makes sense in a custom header, such as one
called “Version.”
Version: 1
This helps prevent the whole “Accept” conundrum, but it also runs into the same
issues of the content-type header as well as forces developers to veer into the
documentation to understand what your custom header is, eliminating any
chance of standardization (unless everyone decides on a standard custom header,
such as “version”).
This also opens up confusion, as developers may ask how to send a version
number through a header and get multiple answers ranging from other APIs'
custom headers to using the content-type header.
For that reason, I cannot recommend using the custom header. And while I
personally agree that the content-type header may not be the best place either, I
think using a pre-existing, standard header is better than creating an offshoot—at
least until a new header is established and standardized for this purpose.
8 Designing your Methods
Utilizing CRUD
CRUD stands for Create, Read, Update and Delete and is an acronym commonly
used when referring to database actions. Because databases are data-driven, like
the Web, we can apply these same principles to our API and how our clients will
interact with the methods.
This means that we will be utilizing specific methods when we want to create new
objects within the resource, specific methods for when we want to update
objects, a specific method for reading data and a specific resource for deleting
objects.
However, before we can apply CRUD to our API, we must first understand the difference between interacting with items versus collections, as well as how each method affects each one. This is because multiple methods can be used for both creating and updating, but each method should only be used in specific cases.
For example, when dealing with a collection you have to be very careful when allowing updates or deletes, as an update on a collection will modify every record within it, and likewise a delete will erase every single record. This means that if a client accidentally targets the collection instead of a single item, they could overwrite or wipe out all of the data within it in a single call.
For this reason, if you plan to let your users do mass edits/deletes, it's always a
good idea to require an additional token in the body to ensure that they are doing
exactly what they are intending. Remember, REST is stateless, so you should not
force them to make multiple calls, as you have no way of carrying over state on
the server side.
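A hypothetical example of such a guarded collection delete (the "confirm" field is illustrative, not a standard) might look like:

DELETE /users HTTP/1.1
Content-Type: application/json

{ "confirm" : "yes-delete-all-users" }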
For single, pre-existing records, it makes perfect sense to let a user edit or even
delete the record. However it doesn’t make much sense to let them create a new
record from within a specific record. Instead, creation should be reserved for use
on the collection.
While this can be a little confusing at first, with proper documentation and the
use of the OPTIONS method, your API users will be able to quickly identify which
methods are available to them. As they work with your API, this will eventually
become second nature as long as you remain consistent in their usage.
HTTP Methods
You’re probably already quite familiar with HTTP methods, or HTTP action verbs.
In fact, every time you browse the Web with your browser, you are taking
advantage of the different methods—GET when you are requesting a website and
POST when you are submitting a form.
Each HTTP method is designed to tell the server what type of action you want to
take, ranging from requesting data, to creating data, to modifying data, to
deleting data, to finding out which method options are available for that given
collection or item.
Of the six methods we’re going to look at, five can be mapped back to CRUD.
POST is traditionally used to create a new object within a collection, GET is used
to request data in a read format, PUT and PATCH are used primarily for editing
existing data and DELETE is used to delete an object.
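As a rough reference, using a hypothetical /users resource, the conventional mapping looks like this:

Create : POST /users
Read   : GET /users (collection) or GET /users/{id} (item)
Update : PUT /users/{id} or PATCH /users/{id}
Delete : DELETE /users/{id}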
However, there is some crossover among the different methods. For example,
while POST is predominately used to create objects, PUT can also be used to create an object if it does not already exist, as we'll see shortly.
GET
The GET HTTP Method is designed explicitly for getting data back from a resource.
This is the most commonly used HTTP Method when making calls to a webpage,
as you are getting back the result set from the server without manipulating it in
any way.
In general, a GET response returns a status code 200 (OK) unless an error
occurs, and relies on a querystring (domain.com/?page=1) to pass data to the
server.
The GET method should be used any time the client wants to retrieve information
from the server without manipulating that data first.
POST
One of the most versatile HTTP Methods, POST was designed to create or
manipulate data and is used commonly in Web forms. Unlike GET, POST relies on
body or form data being transmitted and not on the query string. As such, you can transmit larger and more complex data with POST than the query string allows.
While extremely versatile, and used across the Web to perform many different
functions due to its common acceptance across multiple servers, because we
need an explicit way to define what type of action should be taken within a
resource, it is best to only use the POST method for the creation of an item within
a collection or a result set (as with a multi-filtered search).
When creating an object (the function for which POST should predominately be
used), you will want to return a 201 status code, or Created, as well as the URI of
the created object for the client to reference.
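A sketch of such a creation response (the URI is illustrative) might look like:

HTTP/1.1 201 Created
Location: http://server/api/v1/my-resource/42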
PUT
Less well known is the PUT Method, which is designed to update a resource
(although it can create the resource if it doesn’t exist).
Traditionally, PUT is used to explicitly edit an item, overwriting the object with the
incoming object. When using the PUT method, most developers are not expecting
an object to be created if it doesn’t exist, so taking advantage of this clause within
this method should be done with extreme care to ensure that developers know
exactly how your API uses it. It’s also important that your usage of PUT remains
consistent across all resources. (If it creates an object on one resource, it should
do the same on all the others.)
If you elect to utilize PUT to create an item that doesn’t exist (for example, calling
"/users/1" would create a user with the ID of 1), it is important to return the 201 (Created) status code so the client knows a new object was created.
It’s also important to understand that you cannot use PUT to create within the
resource itself. For example, trying a PUT on /users without explicitly stating the
user ID would be a violation of the HTTP specification for this method.
For this reason I would highly recommend not creating an object with PUT, but
rather returning an error informing the client that the object does not exist and
letting them opt to create it using a POST if that was indeed their intention. In this
case, your request would simply return status code 200 (okay) if the data was
successfully modified, or 304 if the data was the same in the call as it was on the
server.
It’s also important to explain to your users that PUT doesn’t just overwrite the
object data that they submit, but all of the object data. For example, if I have a
user with the following structure:
{
"firstName" : "Mike",
"lastName" : "Stowe",
"city" : "San Francisco",
"state" : "CA"
}
And the client then submits a PUT request containing only:
{"city" : "Oakland"}
The object on the server would be updated as such, reflecting a complete override:
{
  "firstName" : "",
  "lastName" : "",
  "city" : "Oakland",
  "state" : ""
}
Of course, this is traditionally not what the user wants to do, but is the effect of
PUT when used in this case. What the client should do when needing to patch a
portion of the object is to make that same request using PATCH.
PATCH
In HTTP, PATCH is designed to update only the object properties that have been
provided in the call while leaving the other object properties intact.
Using the same example we did for PUT, with PATCH we would see the following
request:
{"city" : "Oakland"}
Which would return the following data result set from the server:
{
"firstName" : "Mike",
"lastName" : "Stowe",
"city" : "Oakland",
"state" : "CA"
}
Like PUT, a PATCH request would return either a 200 for a successful update or a
304 if the data submitted matched the data already on record— meaning nothing
had changed.
DELETE
The DELETE Method is fairly straightforward, but also one of the most dangerous methods out there. Like the PUT and PATCH methods, accidental use of the DELETE method can wreak havoc across the server. For this reason, like PUT and PATCH, use of DELETE on a collection (or the main gateway of the resource: /users) should be disallowed or greatly limited.
When using a DELETE, you will most likely want to return one of three status
codes. In the event that the item (or collection) has been deleted and you are
returning a content or body response declaring such, you would utilize status
code 200. In the event that the item has been deleted and there is nothing to be
returned back in the body, you would use status code 204. A third status code,
202, may be used if the server has accepted the request and queued the item for
deletion but it has not yet been erased.
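A minimal RAML sketch of these three DELETE responses (the resource is hypothetical):

#%RAML 1.0
title: Users API (illustrative)

/users:
  /{userId}:
    delete:
      responses:
        200:
          description: Deleted; a confirmation message is returned in the body.
        202:
          description: Accepted; the item has been queued for deletion.
        204:
          description: Deleted; no content is returned.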
OPTIONS
Unlike the other HTTP Methods we’ve talked about, OPTIONS is not mappable to
CRUD, as it is not designed to interact with the data. Instead, OPTIONS is designed
to let the client discover which methods are available on a given item or collection.
Because you may choose not to make every method available on every resource
(for example not allowing DELETE on a collection, but allowing it on an
item), the OPTIONS method provides an easy way for the client to query the
server to obtain a quick list of the methods it is allowed to use for that collection
or item.
When responding to the OPTIONS method, you should return back either a 200 (if
providing additional information in the body) or a 204 (if not providing any data
outside of the header fields) unless, ironically, you choose not to implement the
OPTIONS method, which would result in a 405 (Method Not Allowed) error.
However, given that the purpose of the OPTIONS method is to declare which
methods are available for use, I would highly recommend implementing it.
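One way to express this in RAML, assuming a hypothetical item resource on which GET, PUT, PATCH, and DELETE are all allowed (a sketch, not a definitive implementation):

#%RAML 1.0
title: Users API (illustrative)

/users:
  /{userId}:
    options:
      responses:
        204:
          description: The allowed methods are returned in the Allow header.
          headers:
            Allow:
              example: "GET, PUT, PATCH, DELETE, OPTIONS"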
Since APIs are designed to be consumed, it is important to make sure that the
client, or consumer, is able to quickly implement your API and understand what is
happening. Unfortunately, many APIs make implementation extremely difficult,
defeating their very purpose. As you build out your API you want to ensure that
you not only provide informational documentation to help your developers
integrate/debug connections, but also return back relevant data whenever a user
makes a call—especially a call that fails.
Using the current HTTP status codes prevents us from having to create a new
system that developers must learn, and creates a standard of responses across
multiple APIs, letting developers easily integrate your API with others while using
the same checks.
For example, Twitter’s API once used the nonstandard status code 420 (Enhance
Your Calm) to indicate rate limiting, while the Spring Framework used 420 to
mean Method Failure. Someone utilizing the Spring Framework at that time while
making a call to Twitter might be confused by what 420 really meant, and whether
the method was not allowed (405) or there was a server error (500), instead of
realizing they simply made too many calls. Imagine how much time and energy
that confusion could cause them in debugging and trying to fix an application that
is already working perfectly.
It’s also important to use status codes because the behavior of the server may be
different from the expected behavior of the client. For example, if a client does a
PUT on an item and the item doesn’t exist, per the RFC the item/object can then
be created on the server—but not all APIs adhere to this idea. As such, by
returning a 201 (Created) instead of a 200 (OK), the client knows that the item did
not previously exist and can choose to delete it (if it was accidentally created) or
update their system with the data to keep everything in sync. Likewise, a 304
response would inform them that nothing was modified (maybe the data was
identical), and a 400 would inform them that it was a bad request that could not
be handled by the server (in the event where you elect not to create the item).
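If you do elect to follow the RFC’s create-on-PUT behavior, spelling out each possible status in your spec makes the server’s behavior explicit to clients. A RAML sketch (the resource and wording are illustrative):

#%RAML 1.0
title: Users API (illustrative)

/users:
  /{userId}:
    put:
      description: |
        Updates the user in full. If no user exists with this ID,
        one is created (per the RFC).
      responses:
        200:
          description: The existing user was updated.
        201:
          description: The user did not previously exist and was created.
        304:
          description: Nothing was modified; the submitted data was identical.
        400:
          description: The request was malformed and could not be handled.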
Some of the more common HTTP status codes you may run into, and should
consider using, include the ones described here:
https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html
Handling Errors
Unfortunately, no matter how hard you try and how carefully you document your
API, errors will always be a reality. Developers will skip over documentation,
misunderstand it or simply discover that calls which were previously valid no
longer work.
Generic error messages tell developers that “something went wrong,” but fail to
tell them exactly what went wrong and how to fix it. This means that developers
must spend hours debugging their code—and your API—in hopes of finding the
answer. Eventually they’ll either just give up (often finding a competitor’s API
instead) or contact support, requiring them to go through everything to track
down oftentimes veiled and abstruse issues.
In the end, it costs you far more to have generic error messages than it does to
provide descriptive error messages that alleviate developer frustration (“Oh, I
know exactly what happened”); reduce development, debug, and integration
time; and reduce support requirements. And by having descriptive error
messages, when support is needed, they will have a good idea of what the issue is
and where to look, saving you resources in that department as well (and keeping
your API support team happy).
Thankfully, despite their fairly rare usage, descriptive error messaging is nothing
new, and there are several different formats out there that already incorporate it.
One of the most popular is JSON API.
JSON API
JSON API was created to serve as a way of returning back JSON-based response
metadata, including hypertext links (which we’ll discuss in Chapter 12), as well as
handling error bodies.
Rather than returning back just a single error, JSON API, as well as the other error
specs we’ll look at, lets you return back multiple errors in a single response, each
containing a unique identifier, an error code to the correlating HTTP status, a brief
title, a more in-depth message describing the error, and a link to retrieve more
information.
Property / Usage:
error.errors: An array containing all of the errors that occurred. (For example, if
the form failed because of missing data, you could list out which fields are missing
here, with an error message for each of them.)
error.errors[].id: A unique identifier for the specific instance of the error. (Please
note that this is not the status code or an application-specific identifier.)
error.errors[].href: A URL that the developer can use to learn more about the
issue (for example, a link to documentation on the issue).
error.errors[].status: The HTTP status code related to this specific error, if
applicable.
error.errors[].code: An application-specific code that identifies the error for
logging or support purposes.
error.errors[].title: A short, human-readable message that briefly describes the
error.
error.errors[].detail: A more in-depth, human-readable description of the error
and how to resolve it.
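Putting these properties together, an error response body might look something like the following. This example follows the property paths in the table above; every value is invented for illustration:

{
  "error": {
    "errors": [
      {
        "id": "a9f3725c-17e2-4c04-ad05-3f1e7c2b6c11",
        "href": "https://api.example.com/docs/errors/34512",
        "status": "400",
        "code": "34512",
        "title": "Missing required field",
        "detail": "The field 'lastName' is required. Please resubmit the request with a value for 'lastName'."
      }
    ]
  }
}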
Usability is Key
Remember, one of the keys to your API’s success is usability. It’s easy to return a
“Permission Denied” error to a resource, but it is far more helpful if you return
“Invalid Access Token” with additional resources telling the developer exactly
what the problem is.
Once you have designed and built your API, one of the most crucial aspects is
protecting your system’s architecture and your users’ data, and scaling your API in
order to prevent downtime and meet your clients’ demands.
The easiest and safest way to do this is by implementing an API manager, such as
MuleSoft’s API Gateway. The API manager can then handle API access
(authentication/provisioning), throttling and rate limiting, setting up and handling
SLA tiers and—of course—security.
Of course, you can build your own API manager and host it on a cloud service such
as AWS, but you have to take into consideration both the magnitude and the
importance of what you’re building. Because an API manager is designed to
provide both scalability and security, you’ll need to make sure you have system
architects who excel in both, as one mistake can cost hundreds of thousands—if
not millions—of dollars.
And like any large-scale system architecture, trying to design your own API
manager will most likely prove costly—usually several times the cost of using a
third-party API manager when all is said and done.
API Access
Controlling access is crucial to the scale and security of your API. By requiring
users to create accounts and obtain API keys for their applications, you retain
control. The API key acts as an identifier, letting you know who is accessing
your API, for what, how many times, and even what they are doing with that
access.
By having an API key, you can monitor these behaviors to isolate malicious users
or potential partners who may need a special API tier. If you choose to monetize
your API, monitoring API access will also allow you to identify clients who may
want or need to upgrade to a higher level.
Typically, an API key is generated when a user signs up through your developer
portal and then adds an application that they will be integrating with your API.
A good API manager goes beyond just the basic management of API keys, and will
also help you implement security for restricting access to user information, such
as OAuth2, LDAP, and/or PingFederate. This lets you take advantage of systems
you are already utilizing, or gives you the flexibility of using a third-party service
to handle OAuth if you choose not to build it yourself (remember Chapter 6).
Throttling
Throttling and rate limiting allow you to prevent abuse of your API, and ensure
that your API withstands large numbers of calls (including DoS attacks and
accidental loops).
Rate limiting lets you set a hard number for how many calls the client may make
in a specific time frame. Essentially, if a client is making too many calls, you can
slow down the responses or cut the client off to prevent the system from being
overrun or disrupting your other users.
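If you describe your API in RAML, a trait is a natural place to document this behavior. In the sketch below, the 429 (Too Many Requests) status is standard, while the X-RateLimit headers are a common convention rather than a requirement, and the numbers are invented:

#%RAML 1.0
title: Users API (illustrative)

traits:
  rateLimited:
    responses:
      429:
        description: |
          The client has exceeded its allotted number of calls for the
          current window and should retry after the indicated period.
        headers:
          X-RateLimit-Limit:
            type: integer
            example: 1000
          X-RateLimit-Remaining:
            type: integer
            example: 0
          Retry-After:
            type: integer
            example: 3600

/users:
  get:
    is: [ rateLimited ]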
SLA Tiers
SLA tiers, or Service Level Agreements, let you set up different rules for different
groups of users. For example, you may have your own mobile apps, premium
partners, paid API users, and standard/free users. You may want to limit the
access of each of these groups to ensure the highest quality engagement for your
users, while also helping prevent loops by inexperienced developers testing out
your API. For example, you can ensure premium partners and your mobile apps
have priority access to the API with the ability to make thousands of calls per
second, while the standard API user may only need four calls per second. This
ensures that the applications needing the most access can quickly obtain it
without any downtime, while your standard users can also rely on your API
without having to worry about someone abusing the system, whether
accidentally or on purpose.
Once you have set up your SLA tiers, you should be able to assign each
application to the appropriate tier.
Analytics
Another valuable tool your API manager should provide is analytics, letting
you quickly see which of your APIs are the most popular and where your calls are
coming from. These analytics can help you identify which types of devices (OS)
are using your API the most.
Along with identifying your API’s usage trends, as well as being able to monitor
API uptime/spikes/response times (especially in regards to your own server
architecture), analytics also help you prove the business use case for your API, as
you are able to show both its popularity and how it is being used.
These metrics are especially key when making business decisions, reporting to
business owners/stakeholders, and designing a Developer Relations/Evangelism
program (as oftentimes an API Evangelist’s performance is measured by the
number of API keys created and the number of calls to the API).
Security
Every day, new threats and vulnerabilities are created, and every day, companies
find themselves racing against the clock to patch them. Thankfully, while an API
manager doesn’t eliminate all threats, it can help protect you against some of the
most common ones. And when used as a proxy, it can prevent malicious attacks
from hitting your architecture.
It’s important to understand that when it comes to security, you can pay a little
up front now, or a lot later. After all, according to Stormpath, in 2013 the average
cost of a personal information breach was $5.5 million. When Sony’s PlayStation
network was hacked, exposing 77 million records, the estimated cost to Sony was
$171 million for insurance, customer support, rebuilding user management and
security systems.
This is in part why you should seriously consider using a pre-established, tested
API manager instead of trying to build your own, because not only do you have
the development costs, but also the risks that go along with it. When it comes to
security, if you don’t have the expertise in building these types of systems, it’s
always best to let those with expertise do it for you.
CORS
By default, browsers prevent JavaScript on a page from calling a domain other
than the one the page was served from. CORS (Cross-Origin Resource Sharing)
lets you enable these cross-domain calls, while also letting you specify host
limitations (for example, only calls from x-host will be allowed; all other hosts will
be rejected).
With that said, simply restricting hosts does not ensure that your application will
be safe from JavaScript attacks, as clients can easily manipulate the JavaScript
code themselves using freely available browser-based tools (such as Firebug or
Inspector).
Being client-side, CORS can also present other security risks. For example, if a
developer chooses to call your API via client-side JavaScript instead of using
a server-side language, they may expose their API key, access tokens and other
secret information.
As such, it’s important to understand how users will be utilizing CORS to access
your API to ensure that secret or detrimental information is not leaked.
Keep in mind that every API manager operates differently. Some API managers
may let you set up a list of allowed hosts, while others may issue a blanket
statement allowing every host to make cross-origin requests.
As such, the golden rule for CORS is to leave it disabled until you have a valid,
well-thought-out use case for enabling it.
XML Threat Protection
With the growth of SOA in enterprise architectures, hackers worked to find new
ways to exploit vulnerabilities, oftentimes by injecting malicious code
into the data being passed through the system. In the case of XML services, users
with malicious intent could build the XML data in such a way as to exhaust server
memory, hijack resources, brute force passwords, perform data tampering, inject
SQL, or even embed malicious files or code.
In order to exhaust server memory, these attackers might create large and
recursive payloads, something that you can help prevent by limiting the length of
the XML object in your API manager, as well as how many levels deep the XML
may nest.
Along with memory attacks, malicious hackers may also try to push through
malicious XPath/XSLT or SQL injections in an attempt to get the API layer to pass
along more details than desired to services or the database.
Malicious attacks may also include embedding system commands in the XML
Object by using CDATA or including malicious files within the XML payload.
Of course, there are several more XML-based attacks that can be utilized to wreak
havoc on an API and the underlying architecture, which is why having XML threat
protection in place is key to ensuring the safety of your API, your application and
your users. Again, since security is built in layers, while the API manager can help
prevent some of these threats, monitor for malicious code and limit the size of
the XML payload or how deeply it can nest, you will still want to be meticulous in
building your services architecture to ensure that you are eliminating threats like
SQL and code injection on the off chance they are missed by your API gateway.
JSON Threat Protection
Similar to XML, JSON falls victim to several of the same malicious attacks.
Attackers can easily bloat the JSON and add recursive levels to tie up memory, as
well as inject malicious code or SQL that they anticipate your application will run.
As with XML threat protection, you will want to limit the size and depth of the
JSON payload, as well as constantly be on the watch for security risks that might
make it through the API gateway including SQL injections and malicious code the
user wants you to evaluate.
MuleSoft’s API Manager lets you set up custom policies to provide both XML and
JSON threat protection.
Since the goal of any API is developer implementation, it’s vital not to forget one
of your most important assets—one that you should be referencing both in error
messages and possibly even in your hypermedia responses: documentation.
The challenge is that not only should your documentation be consistent in its
appearance, but also consistent with the functionality of your API and in sync with
the latest changes. Your documentation should also be easily understood and
written for developers (typically by an experienced documentation team).
Until recently, solutions for documentation have included expensive third-party
systems, the use of an existing CMS (Content Management System), or even
dedicated CMSs based on open source software such as Drupal/WordPress.
However, with the expansion of open specs such as RAML—and the communities
surrounding them—documentation has become incredibly easy. Instead of writing
and maintaining documentation by hand, much of it can now be generated
directly from the spec itself using freely available tooling.
Before we jump into the different documentation tools, though, it’s important
to understand what makes for good documentation.
As such, good documentation should be clear and concise, but also visual,
providing the following:
- A list of available resources and the methods available on each
- The parameters for each method, including types, special formatting, rules and
whether or not they are required
- Code examples in multiple languages (e.g. cURL with PHP, as well as examples
for Java, .Net, Ruby, etc.)
- SDK examples (if SDKs are provided) showing how to access the
resource/method using the SDK for their language
- An interactive way to try out API calls (such as an API Console or API Notebook)
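As a small illustration of generating documentation from the spec, RAML lets you embed long-form prose alongside the API definition via its root-level documentation node, which tooling can render next to the generated reference (the content below is invented):

#%RAML 1.0
title: Users API (illustrative)
documentation:
  - title: Getting Started
    content: |
      Sign up in the developer portal to obtain an API key, then
      request an access token as described in the Authorization
      section of this guide.
  - title: Error Handling
    content: |
      All errors are returned in the JSON API format, with an
      errors array describing each failure in detail.

/users:
  get:
    description: Returns a paginated list of users.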