ArganoMS3 | Cloud Integration Solutions


API Quality Control

October 19, 2018

Written By: Mark Norton

So, you finally got your API design put together, and the development team has implemented it. They have done a bit of smoke testing and say it’s pretty much working. So, release it to production? Well, maybe. It depends on your quality control strategy. One approach is to release early and often, using a DevOps pipeline to deploy the latest version of the API continuously. If your development team iterates quickly, any reported problems will be fixed within days, perhaps even hours.

Many companies, especially large established ones, are not comfortable with this approach. They prefer a more methodical approach to API quality control: running a candidate build through a test phase before releasing to production. That’s what this blog explores. Note that it is not an either/or choice; you can mix some of the suggestions here into your CI/CD approach, especially since many of them can be automated.

The Importance of Good Requirements

Before you can make a methodical test plan for your candidate APIs, you MUST understand, in detail, what the API is supposed to do. The only way to do this is to document all API requirements carefully. These requirements should specify:

  • The resources being managed
  • Data structures associated with the resource: request and response bodies
  • How to identify the resource: search parameters, paging, etc.
  • What errors can happen and how will they be reported
  • Logging and analytics support
  • How to handle security: both authentication and authorization
  • Response time, throughput, volume requirements
  • Side effects to updates (if any)

You cannot test something if you don’t know what it’s supposed to do. All optional elements need to be specified, and all constraints should be spelled out, including cardinality, null-ability, number ranges, enum values, etc. We understand that producing documentation can be a pain, but a little time spent here, creating the necessary documentation, will save you from wasting far more time later.
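Constraints like these become directly testable once written down. A minimal sketch of what that can look like in code; the resource, field names, and limits here are hypothetical, not from any particular API:

```python
# Sketch: turning documented requirements (cardinality, null-ability,
# number ranges, enum values) into machine-checkable constraints.
# The "order" resource and its fields are illustrative assumptions.

def validate_order(payload):
    """Return a list of constraint violations for a hypothetical order resource."""
    errors = []
    # Cardinality / null-ability: customer_id is required and non-null.
    if payload.get("customer_id") is None:
        errors.append("customer_id is required and must not be null")
    # Enum values: status must come from a fixed set.
    if payload.get("status") not in {"NEW", "SHIPPED", "CANCELLED"}:
        errors.append("status must be NEW, SHIPPED, or CANCELLED")
    # Number ranges: quantity must be an integer between 1 and 100.
    qty = payload.get("quantity")
    if not isinstance(qty, int) or not 1 <= qty <= 100:
        errors.append("quantity must be an integer between 1 and 100")
    return errors
```

A requirements document that pins down every field this precisely makes the later test phases almost mechanical to write.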

Write a Test Plan

It’s a good idea to write down how you approach testing a release candidate. These notes don’t need to be a formal document, but if you have test engineers doing the actual testing, it’s good to have a checklist to track the progress before signing off on a release to production.

Consider the following kinds of testing:

  • Smoke testing done by developers
  • Unit testing
  • Functional testing
  • User acceptance testing
  • Performance testing
  • Security Testing

Depending on how rigorous you want to be about testing, your test plan could include some or all of these in your checklist.

Developer Quality

Quality cannot be “tested” into the software. Quality API development starts with programmers using good development practices. Often these practices are described in governance standards and policies, but it also relies on the programmer having a solid understanding of the tools in use, how to write good software, and how to avoid simple mistakes (copy/paste errors, for example). During the coding process, the programmer should constantly check to see if the code written works. This “smoke” testing is not as comprehensive as later testing phases but can catch common problems (compiler errors, malformed data structures, lack of error handling, etc.).
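A developer smoke check can be as small as a scripted call to a health endpoint after each build. A minimal sketch, where the endpoint path and the `get` transport callable are assumptions for illustration:

```python
# Sketch of a developer smoke test: verify the freshly built API answers
# at all before deeper testing begins. The /health path is an assumption.

def smoke_test(get, base_url):
    """Return True if the API's health endpoint answers with HTTP 200."""
    status, _body = get(base_url + "/health")
    return status == 200

# A stubbed transport stands in for a real HTTP client here:
def fake_get(url):
    return 200, {"status": "UP"}
```

Swapping `fake_get` for a real HTTP client turns this into a one-line post-build sanity check.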

Execution Quality

All APIs should run without errors or exceptions. Often, triggering certain input conditions can be difficult. As such, we test for execution quality inside the API application. APIs are usually broken down into flows, sub-flows, batch tasks, etc. These code units can be evaluated using a Unit Test.

Unit tests set up the state, data, and conditions that would be present when executing a code unit. This setup is called mocking (it mocks real situations). Mocking requires close and careful attention to ensure the input state is as realistic as possible. The code unit is then invoked with the mocked inputs, and assertions verify that the expected outputs are produced.

Most development environments support the automatic execution of unit tests at build time. Maven, for example, lets you run unit tests as part of the build; if a test fails, the build fails. Unit tests are especially beneficial when combined with DevOps build scripts.
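The mock-then-assert pattern described above looks roughly like this. The code unit, repository, and field names are illustrative assumptions; in a MuleSoft project the same idea is expressed with MUnit mocks:

```python
import unittest
from unittest import mock

# Sketch of a unit test with mocking: the "code unit" is a function that
# looks up an order through a repository; the repository is mocked so the
# test fully controls the input state. All names are illustrative.

def get_order_summary(repo, order_id):
    order = repo.find(order_id)
    if order is None:
        return {"error": "not found"}
    return {"id": order["id"], "total": order["price"] * order["quantity"]}

class OrderSummaryTest(unittest.TestCase):
    def test_summary_for_existing_order(self):
        repo = mock.Mock()
        repo.find.return_value = {"id": "o1", "price": 2.5, "quantity": 4}
        result = get_order_summary(repo, "o1")
        self.assertEqual(result, {"id": "o1", "total": 10.0})
        repo.find.assert_called_once_with("o1")

    def test_missing_order(self):
        repo = mock.Mock()
        repo.find.return_value = None
        self.assertEqual(get_order_summary(repo, "o2"), {"error": "not found"})
```

Wired into the build (Maven’s Surefire plugin, for instance), a failure in either test would fail the build.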

Functional Quality

Requirements gathered early in API design specify how the API is intended to function. Each request must return the specified response, and required parameters must be enforced. Omitting an optional parameter should not cause a failure or error, and supplying one should not conflict with required or other optional parameters.

Functional testing measures the ability of an API to deliver the functionality specified by its requirements and design document. A functional test exercises the API from the outside, making calls to it in a controlled environment (QA) with a set of structured requests, including mocked payloads, and verifying the results. Some development environments (such as API Pro) support the generation of functional tests, reducing development time.
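A functional test run is essentially a table of structured requests paired with expected outcomes. A minimal sketch, where the paths, parameters, and the pluggable `call` transport are assumptions for illustration:

```python
# Sketch of a functional test driver. `call` is any function mapping
# (method, path, params) -> (status, body), so a real HTTP client for the
# QA environment or a local stub can be plugged in. Paths are hypothetical.

FUNCTIONAL_CASES = [
    # (method, path, params, expected_status)
    ("GET", "/orders", {"page": 1}, 200),                # required param present
    ("GET", "/orders", {}, 400),                         # required param missing
    ("GET", "/orders", {"page": 1, "sort": "id"}, 200),  # optional param accepted
]

def run_functional_tests(call):
    failures = []
    for method, path, params, expected in FUNCTIONAL_CASES:
        status, _body = call(method, path, params)
        if status != expected:
            failures.append((method, path, params, status, expected))
    return failures

def stub_api(method, path, params):
    # Stand-in for the QA deployment: /orders requires the "page" parameter.
    if path == "/orders":
        return (200, {"items": []}) if "page" in params else (400, {"error": "page required"})
    return 404, {}
```

Note how the cases exercise exactly the requirement stated above: required parameters enforced, optional parameters harmless.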

Integrated Quality

Functional tests can do a good job of verifying the expected operation of an API, but they don’t adequately evaluate how the API will work in the context of the business application. If resources are available, also test the APIs in the context of the intended business application. This testing can be done either manually using a test script or automated using a tool such as Selenium. The success of user acceptance tests is the basis for delivery sign-off to the end user. Since business applications are often highly dependent on underlying APIs, UATs may not be possible in all cases or may be done late in the development process.

Performance Quality

Proper requirements will specify acceptable response times, processing volumes, etc. Even if these are not defined, some performance testing is recommended to determine how an API performs under a simulated load. Various tools support performance measurement; some embed extensions in the runtime environment to capture code coverage and execution times (JMeter, for example). There are also load-testing services that will simulate many users, request volumes, request rates, etc.
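Before reaching for a full load-testing tool, the core measurement is simple: time repeated calls and compare the latency percentiles against the figures in the requirements. A minimal sketch, where `call` is a stand-in for one real request:

```python
import time

# Sketch of a micro latency measurement: invoke an endpoint n times and
# report simple percentiles. `call` is any zero-argument function that
# performs one request; wiring in a real HTTP call is left to the reader.

def measure_latency(call, n=100):
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        call()
        samples.append(time.perf_counter() - start)
    samples.sort()
    return {
        "p50": samples[int(n * 0.50)],
        "p95": samples[int(n * 0.95)],
        "max": samples[-1],
    }
```

Real tools such as JMeter add concurrency, ramp-up, and reporting on top of this same idea.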

Security Quality

Do not ignore security when testing the quality of an API. End users are usually challenged to authenticate themselves; once verified, a session token of some kind is created and used during subsequent operations. Typically, this token is passed in an HTTP header specified as part of the API design (in the RAML). Inter-application security also needs to be considered (the front-end application communicating with the back-end API application), often handled by a client ID and secret. Security testing verifies that all of these tokens are present and correct.
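The token-presence checks described above can be scripted the same way as functional tests. A minimal sketch; the header names, credential values, and stub behavior are assumptions for illustration:

```python
# Sketch of security checks: requests without credentials must be rejected,
# and requests carrying a bearer token plus client id/secret must succeed.
# `call` maps a headers dict to (status, body); names are illustrative.

def security_checks(call):
    failures = []
    # No credentials at all -> expect 401.
    status, _ = call({})
    if status != 401:
        failures.append("missing token was not rejected")
    # Valid bearer token plus client credentials -> expect 200.
    headers = {
        "Authorization": "Bearer valid-token",
        "client_id": "demo-app",
        "client_secret": "demo-secret",
    }
    status, _ = call(headers)
    if status != 200:
        failures.append("valid credentials were rejected")
    return failures

def stub_secure_api(headers):
    # Stand-in for an API gateway enforcing both token and client id.
    if headers.get("Authorization", "").startswith("Bearer ") and "client_id" in headers:
        return 200, {"ok": True}
    return 401, {"error": "unauthorized"}
```

Checking the rejection path (the 401) is just as important as checking the happy path; a gateway that silently accepts missing tokens passes naive tests.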

Other kinds of security testing can include:

  • Penetration tests
  • Data mocking
  • Code and SQL injections
  • Clear text credentials
  • (many others)

As with performance testing, there are third-party services available that can help with security testing.

Summary

Having a good set of requirements and a solid test plan goes a long way toward establishing a quality assurance process. Some of this testing can be automated, reducing costs and speeding time to release. Give some thought to how to blend these techniques into your current QA strategy.

About The Author

Mark Norton is a senior integration specialist at ArganoMS3 with over 15 years of API design and implementation experience in support of higher education, retail, healthcare, telecommunications, IT, and other vertical markets. As a senior architect, Mark guides the future direction of clients by conducting architect design reviews and consulting on tactical directions. He has experience with MuleSoft, RedHat JBoss, and Java Servlet API Platforms.

Filed Under: APIs, Integration Tagged With: Best Practices, Implementation, Quality

COPYRIGHT © 2022 ⬤ ArganoMS³ MOUNTAIN STATE SOFTWARE SOLUTIONS