ArganoMS3 | Cloud Integration Solutions


By Mark Norton

Documentation is something most programmers would willingly avoid given the smallest chance. Most see it as a distraction from what they do, which is writing code. If asked directly, however, most programmers will admit that documentation is important for the client, team members, those who follow them, and even (whisper it) themselves. With a little extra effort, MuleSoft developers can take advantage of the built-in documentation generation in Anypoint Studio.

This concept of documentation embedded in code has been around for quite some time. Since MuleSoft uses Java extensively in developing their own products, most MuleSoft developers are familiar with JavaDoc, a documentation generation utility introduced in 1995. JavaDoc generates high-quality documentation for Java interfaces, classes, and objects, formatted as a set of HTML pages.

Examples

MuleSoft Anypoint Studio provides documentation generation in a similar manner. To illustrate how this works, a simple application was created with three elements: an HTTP listener, a database request, and a data formatter.

Adding Additional Information

Additional information can be added using the Notes tab of flow elements. Here, a note is added to the database query:

This is very similar to the annotations supported by JavaDoc, which allow additional information to be included in the code to further clarify the documentation generated later.

The “doc:description” parameter is only included if a description is specified.
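For reference, here is an illustrative sketch of how the example flow might look in Mule configuration XML, with the note stored as a doc:description attribute. Element names and attributes vary between Mule versions and connector releases, so treat this as a hypothetical outline rather than exact syntax:

```xml
<!-- Hypothetical sketch of the example flow; exact element and attribute
     syntax differs between Mule 3 and Mule 4. -->
<flow name="person_sapiFlow">
  <http:listener config-ref="HTTP_Listener_config" path="/persons"
                 doc:name="Listener"/>
  <db:select config-ref="Database_Config" doc:name="All Persons"
             doc:description="Fields for the person table include ...">
    <db:sql>SELECT * FROM person</db:sql>
  </db:select>
  <json:object-to-json-transformer doc:name="Object to JSON"/>
</flow>
```

The doc:name values become the element labels in the generated documentation, and the doc:description text appears beneath the element's details.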

Documentation Generation

Once code is complete and ready for document generation, click on the Export Studio Documentation icon:

A dialog box is displayed to select a folder to save results in:

Confirm the folder creation (it will overwrite a previous version, if it exists):

Viewing the Generated Documentation

Results are saved in the folder as a set of HTML files, plus graphics.

Open the “index.html” file in a browser. You’ll see a formatted view of the generated documentation:

All flows are listed on the left. There is only a single flow in this example, but if there were more, they would be included here as well.

A graphical representation of the example flow is shown, the same one you would see in AP Studio. Each processing element is then broken out below for further detail. Two are shown above.

The first is an HTTP listener inbound endpoint that listens on port 8081 (defined by the HTTP configuration) and the path “/persons”.

The second is a database operation that fetches records based on a simple query. The element is labeled as part of the “person_sapiFlow”, along with the label specified by the developer, “All Persons” in this case. Note the text at the bottom of the section that reads “Fields for the person table include …”. This information was added by the developer using the Notes tab of the element in AP Studio:
Note that the final processing element, “json:object-to-json-transformer”, is not included in the documentation. Simple processing elements are omitted to improve readability.

Conclusion

Overall, the results are clean and professional looking. While all of the important information is included, there is room for improvement. For example, error handling does not have any documentation generated, though it can be viewed in the full flow XML. I would love to see additional resources included, even if just as a list. These could include POM files, schemas, API definitions, and property files, just to name a few. Finally, there should be an option to generate directly to PDF so that the file can be printed and/or distributed. Currently, developers can expand fields manually and print the page as a PDF, but all these extra steps shouldn’t be necessary.

While not perfect, documentation generated by AP Studio is a quick and easy solution for when documentation is required by a client. A little descriptive work by the developer results in a doc set that describes the application, how it is broken down, and (roughly) how it works.

About The Author

Mark Norton is a senior integration specialist at ArganoMS3 with over 15 years of API design and implementation experience in support of higher education, retail, healthcare, telecommunications, IT, and other vertical markets. As a senior architect, Mark guides the future direction of clients by conducting architect design reviews and consulting on tactical directions. He has experience with MuleSoft, RedHat JBoss, and Java Servlet API Platforms.

More Posts by this Author

  • DXtera Integration
  • API Quality Control

Filed Under: Integration, Mulesoft Tagged With: AP Studio, Programming, Version Update

Check out ArganoMS3 Software Engineers Jose Montoya & Joshua Erney in “Mule Made Easy”. In Part 2 of this two-part series, Montoya describes the difference between exception handling in Mule 3 and Mule 4, along with an overview of the newest features. To review Part 1, see here.

Filed Under: Integration, Mulesoft Tagged With: Blog, Video

Check out ArganoMS3 Software Engineers Jose Montoya and Joshua Erney in “Mule Made Easy”. In Part 1 of a two-part series, Montoya describes the difference between errors in Mule 3 and Mule 4, along with the new functionality. Stay tuned for Part 2: Exception Handling in Mule 4, which will be airing Thursday [11/1] at noon!

Filed Under: Integration, Mulesoft Tagged With: Blog, Video

Written By: Mark Norton

So, you finally got your API design put together, and the guys in IT have implemented it. They have done a bit of smoke testing and say it’s pretty much working. So, release it to production? Well, maybe. It depends on your quality control strategy. One approach is to release early and often, using a DevOps approach to deploy the latest version of the API continuously. If your development team iterates quickly, any reported problems will be fixed within days, perhaps even hours.

Many companies, especially large established ones, are not comfortable with this approach. They would rather take a more methodical approach to API quality control, running a candidate build through a test phase before releasing to production. That’s what this blog explores. It should also be noted that it is not an either/or choice. You can mix some of the suggestions here into your CI/CD approach, especially since some of them can be automated.

The Importance of Good Requirements

Before you can make a methodical test plan for your candidate APIs, you MUST understand what the API is supposed to do, in detail. The only way to do this is to document all API requirements carefully. These requirements should specify:

  • The resources being managed
  • Data structures associated with the resource: request and response bodies
  • How to identify the resource: search parameters, paging, etc.
  • What errors can happen and how they will be reported
  • Logging and analytics support
  • How to handle security: both authentication and authorization
  • Response time, throughput, volume requirements
  • Side effects to updates (if any)

You cannot test for something if you don’t know what it’s supposed to do. All optional elements need to be specified. All constraints should be spelled out, including factors such as cardinality, nullability, number ranges, enum values, etc. We understand that producing documentation can be a pain, but a little time spent here, creating the necessary documentation, will save you from wasting time later.
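To make this concrete, documented constraints translate directly into checks. Here is a small Python sketch; the field names and rules are invented for illustration, not taken from any real spec:

```python
# Hypothetical sketch: turning documented API constraints into checks.
# The fields and rules below are illustrative only.
CONSTRAINTS = {
    "status": {"enum": ["active", "inactive"], "nullable": False},
    "age":    {"min": 0, "max": 130, "nullable": True},
}

def validate(record):
    """Return a list of constraint violations for one response record."""
    errors = []
    for field, rules in CONSTRAINTS.items():
        value = record.get(field)
        if value is None:
            if not rules.get("nullable", True):
                errors.append(f"{field}: must not be null")
            continue
        if "enum" in rules and value not in rules["enum"]:
            errors.append(f"{field}: {value!r} not in {rules['enum']}")
        if "min" in rules and value < rules["min"]:
            errors.append(f"{field}: {value} below minimum {rules['min']}")
        if "max" in rules and value > rules["max"]:
            errors.append(f"{field}: {value} above maximum {rules['max']}")
    return errors
```

Without the constraint table, there is nothing for the checks to assert against, which is the point: undocumented requirements are untestable.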

Write a Test Plan

It’s a good idea to write down how you approach testing a release candidate. These notes don’t need to be a formal document, but if you have test engineers doing the actual testing, it’s good to have a checklist to track the progress before signing off on a release to production.

Consider the following kinds of testing:

  • Smoke testing done by developers
  • Unit testing
  • Functional testing
  • User acceptance testing
  • Performance testing
  • Security Testing

Depending on how rigorous you want to be about testing, your test plan could include some or all of these in your checklist.

Developer Quality

Quality cannot be “tested” into the software. Quality API development starts with programmers using good development practices. Often these practices are described in governance standards and policies, but it also relies on the programmer having a solid understanding of the tools in use, how to write good software, and how to avoid simple mistakes (copy/paste errors, for example). During the coding process, the programmer should constantly check to see if the code written works. This “smoke” testing is not as comprehensive as later testing phases but can catch common problems (compiler errors, malformed data structures, lack of error handling, etc.).

Execution Quality

All APIs should run without errors or exceptions. Often, triggering certain input conditions can be difficult. As such, we test for execution quality inside the API application. APIs are usually broken down into flows, sub-flows, batch tasks, etc. These code units can be evaluated using a Unit Test.

Unit tests set up the state data and conditions that would be present when executing a code unit. This setup is called mocking (as it mocks real situations). Mocking requires very close and careful attention to ensure the input state is as realistic as possible. The code unit is then invoked with the mock inputs, and assertions verify the expected output results.
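A minimal sketch of this pattern in Python, using the standard library's unittest.mock; the code unit and its repository dependency are hypothetical names invented for the example:

```python
from unittest.mock import Mock

def list_person_names(repository):
    """The unit under test: formats records fetched from a data source."""
    return [row["name"] for row in repository.fetch_all("person")]

def test_list_person_names():
    repo = Mock()
    # Mock the state the unit would see in production.
    repo.fetch_all.return_value = [{"name": "Ada"}, {"name": "Grace"}]
    result = list_person_names(repo)
    # Assert on the expected output and on how the dependency was used.
    assert result == ["Ada", "Grace"]
    repo.fetch_all.assert_called_once_with("person")

test_list_person_names()
```

The mock stands in for the real database, so the unit can be exercised without triggering hard-to-reproduce input conditions.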

Most development environments support the automatic execution of unit tests at build time. Maven, for example, runs unit tests as part of the build; if a test fails, the build fails. Unit tests are especially beneficial when combined with DevOps build scripts.
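As a sketch, the Maven behavior above comes from the Surefire plugin, which is bound to the build's test phase; the version number here is illustrative:

```xml
<!-- pom.xml fragment: Surefire executes unit tests during the "test"
     phase, and a failing test fails the build. Version is illustrative. -->
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <version>3.2.5</version>
    </plugin>
  </plugins>
</build>
```

Running `mvn test` executes the suite, and a failure stops the build before package or deploy; `-DskipTests` can bypass it when needed.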

Functional Quality

Requirements gathered early in API design specify how the API is intended to function. Each request must return the specified response, and required parameters must be present. Omitting optional parameters should not cause a failure or error, and adding optional parameters should not conflict with required or other optional parameters.

Functional testing measures the ability of an API to deliver the functionality specified by the requirements and design documents. A functional test is run from outside the API by making calls to it in a controlled environment (QA) with a set of structured requests, including mocked payloads, and evaluating the results against expectations. Some development environments (such as API Pro) support the generation of functional tests, thus reducing development time.
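The shape of such a test can be sketched in Python using only the standard library: stand up an in-process substitute for the API, issue a structured request, and assert on the response. The /persons resource mirrors the example earlier on this page; the payload is invented:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Functional-test sketch: a stand-in server plays the role of the API
# deployed in a controlled QA environment.
class PersonsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/persons":
            body = json.dumps([{"name": "Ada"}]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), PersonsHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"

# Structured request, then assertions against the expected results.
with urlopen(f"{base}/persons") as resp:
    assert resp.status == 200
    data = json.loads(resp.read())
assert data == [{"name": "Ada"}]
server.shutdown()
```

In practice the calls would target the deployed QA endpoint rather than an in-process stub, but the request/assert structure is the same.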

Integrated Quality

Functional tests can do a good job of testing the expected operation of an API, but this doesn’t adequately evaluate how the API will work in the context of the business application. If resources are available, test the APIs in the context of the intended business application. Testing can be done either manually using a test script or automated using a tool such as Selenium. The success of User Acceptance Tests is the basis for delivery sign-off to the end user. Since business applications are often highly dependent on underlying APIs, UATs may not be possible in all cases or may be done late in the development process.

Performance Quality

Proper requirements will specify acceptable response times, process volumes, etc. Even if such conditions are not defined, completing some performance testing is a recommended strategy to help determine how an API will perform under a simulated load. Various tools are available that support performance measurements. Some require embedded extensions to the runtime environment to capture code coverage and execution times (JMeter, for example). There are also load-testing services that will simulate many users, request volumes, request rates, etc.
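The core of a performance check is just timing many calls and comparing a percentile against the documented target. A minimal Python sketch, where the call and the 50 ms p95 target are both invented stand-ins:

```python
import statistics
import time

# Load-test sketch: time repeated calls and compare against a target.
# call_api and the 50 ms p95 target are illustrative stand-ins.
def call_api():
    time.sleep(0.001)  # stand-in for a real request round-trip
    return 200

samples = []
for _ in range(100):
    start = time.perf_counter()
    call_api()
    samples.append((time.perf_counter() - start) * 1000)  # milliseconds

p95 = statistics.quantiles(samples, n=20)[-1]  # 95th percentile
print(f"p95 latency: {p95:.2f} ms")
assert p95 < 50, "response-time requirement not met"
```

Real load tools add concurrency, ramp-up schedules, and richer reporting, but they are ultimately producing and judging distributions like this one.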

Security Quality

Do not ignore security when testing the quality of an API. End users are usually challenged to authenticate themselves. Once verified, a session token of some kind is created and used during subsequent operations. Typically, this data passes through an HTTP header, a specified part of the API design (in the RAML). Inter-application security also needs to be considered (the front-end application communicating with the back-end API application), often handled by a client id and secret. Security testing verifies that all of these tokens are present and correct.
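A simple presence check for those tokens can be sketched in a few lines of Python; the header names here are common conventions, not taken from a specific API design:

```python
# Security-test sketch: verify the tokens the design calls for are
# actually present on a request. Header names are illustrative.
REQUIRED_HEADERS = ("Authorization", "client_id", "client_secret")

def missing_security_headers(headers):
    """Return the required security headers absent from a request."""
    present = {name.lower() for name in headers}
    return [h for h in REQUIRED_HEADERS if h.lower() not in present]

good = {"Authorization": "Bearer abc123",
        "client_id": "my-app", "client_secret": "s3cret"}
bad = {"Authorization": "Bearer abc123"}

assert missing_security_headers(good) == []
assert missing_security_headers(bad) == ["client_id", "client_secret"]
```

A fuller security test would also verify that the tokens are valid, expired tokens are rejected, and unauthorized callers receive the documented error response.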

Other kinds of security testing can include:

  • Penetration tests
  • Data mocking
  • Code and SQL injections
  • Clear text credentials
  • (many others)

As with performance testing, there are third-party services available that can help with security testing.

Summary

Having a good set of requirements and a solid test plan can go a long way towards establishing a quality assurance process. Some of this testing can be automated, reducing costs and speeding time to release. Give some thought to how to blend these techniques into your current QA strategy.


Filed Under: APIs, Integration Tagged With: Best Practices, Implementation, Quality

This will be my 3rd year speaking at the annual Wolfram Technology Conference on programmer training methodology – specifically using Mathematica / Wolfram language. Each year, I’ve presented alongside a new cohort of high school students from Torrey Pines High School. One day back in 2015, I cold-called Abby Brown, a pioneering math teacher at Torrey Pines who’s been teaching her math students Mathematica for years. We’ve been working together ever since.

In 2016, we proposed a plan for a scalable network of extra-curricular high school coding clubs that could develop and support visual simulations of scientific concepts for teachers’ use in the classroom. Such simulations are particularly useful for highlighting abstract conceptual distinctions.

Here’s a simple example showing a rocket trajectory (a la Tsiolkovsky’s rocket equation), LEFT does not use a variable mass model, while RIGHT does. The difference in final altitude is visible at a glance. For beginners, pictures are better than text for understanding the essence of concepts. Interactive pictures (simulations) are even better, because learners can test their mental models by first predicting an outcome and then trying it out and seeing if it’s right.
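The distinction in that picture can also be shown numerically. The sketch below uses invented masses and exhaust velocity; the fixed-mass figure is a naive constant-mass approximation, while the variable-mass figure is Tsiolkovsky's rocket equation, Δv = vₑ ln(m₀/m_f):

```python
import math

# Illustrative numbers only: compare a fixed-mass model with the
# variable-mass (Tsiolkovsky) model for the same propellant burn.
m0, mf = 10_000.0, 4_000.0   # initial and final mass, kg
ve = 2_500.0                 # exhaust velocity, m/s
burned = m0 - mf

# Fixed-mass approximation: exhaust momentum over a constant mass m0.
dv_fixed = ve * burned / m0

# Variable-mass model: Tsiolkovsky's rocket equation.
dv_tsiolkovsky = ve * math.log(m0 / mf)

print(f"fixed-mass dv:    {dv_fixed:.0f} m/s")
print(f"variable-mass dv: {dv_tsiolkovsky:.0f} m/s")
```

The variable-mass model always yields the larger Δv, which is exactly the altitude gap visible at a glance in the side-by-side simulation.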

In 2017, we presented an example of a learning tool that would analyze a user’s practice data for a given skill and return high-resolution, visual feedback. We started with a piano, because it has an existing data interface (MIDI) and it’s a relatable skill for most people, but we are ultimately interested in many other skills (incl. programming). Here’s an example visualization that shows which keys are seeing the most mistakes and which key the student is mistakenly playing instead. A perfect practice session would generate a flat plane.

We selected Mathematica as our training language for beginner programmers for the following reasons (we wrote more about that here, but here’s a shorter summary):

  1. It provides almost instantaneous capability to answer real, interesting questions through computation, often in a single line of code. WolframAlpha, the knowledge engine that powers many of the responses on Alexa and Siri, is built into the Wolfram Language.
  2. It supports everything you need in a single tool. No libraries. No package managers. Just install, open a notebook, and start writing code.
  3. It’s a cathedral, not a bazaar – and thus, it is more structurally and stylistically coherent than an analogous OSS language like Python. Conceptual “noise” is exponentially more distracting and frustrating to beginners than it is to competent or expert practitioners.
  4. It favors functional over object-oriented programming. OOP tends to quickly demand too much structural complexity for a beginner to follow effectively, so they will attempt to simplify and immediately begin forming bad habits that hinder their future development. Better to learn OOP when the GoF Design Patterns are also conceptually within reach.
  5. In the vein of simplification and accomplishing a lot with a little, Mathematica has entry-level functions that automate things you’d normally have to configure by hand. E.g. a classifier or predictor. Wolfram language (WolfLang) picks the ML algorithm and sets the hyperparameters. Feed in reasonably good labeled data, and you can prototype a useful image recognition app and then deploy it directly to the cloud or instantly generate an API for it – all in 2-3 lines of code. (You can also define your own neural net, when ready).

  6. There are actually a lot of free, cloud-based (no download) resources for interactively learning Wolfram. The puzzles are fairly accessible to learners of any age (even young kids), with curiosity as the driver. A few examples:

6.1.  Elementary Introduction to Wolfram Language (EIWL) (for beginners)
6.2.  Wolfram Programming Lab (fun programming challenges for kids)
6.3.  Fast Introduction for Programmers (for programmers in other languages)
6.4.  Wolfram Language Code Gallery (showcase of cool projects)
6.5.  Wolfram Challenges (interesting questions with forms for code submissions)
6.6.  Wolfram Demonstrations (interactive, visual simulations using WolfLang)
6.7.  And more…

  7. Wolfram may not be the programming language of choice for most production software systems, but it is extremely useful for science and engineering students (if you can do it in Matlab, you can usually do it in Mathematica an easier way). And truly, there is no non-computational discipline anymore – the liberal arts and business schools would also benefit from incorporating more computation into their curricula.

This year, the focus is on the programmer training life cycle, including best practices I’ve internalized (either through study or through trial) in the years I’ve been training students.

Here’s the short-list of topics.

  1. Conceptual Progression – What is the sequence of ideas and associated study and practice materials that you put in front of a novice programmer in order to reliably get them to Competent and then to Expert?
  2. Assessment of Understanding – How do you (teacher/trainer) know what learners understand, what they don’t understand, and what they have misunderstood? (Remembering != Understanding)
  3. Knowledge vs. Insight – How do you stage learning activities to emphasize the essential over the trivial? Tactics can be interpreted much more quickly and effectively after a learner has understood the Strategies.
  4. Teamwork & Collaborative Learning – How do you make good use of the different background knowledge, skills, personalities, and synergies brought to the table by each member of a learning cohort? What’s the on-ramp to collaboration and then how is it sustained?
  5. Cultivating Intrinsic Motivation – How do you draw upon the internal energies of each individual to do their best work? What are the environmental conditions that enable people to act autonomously, per their own vision, without going totally off-track?
  6. Mentorship & Tutoring – What are the behavioral indicators that it’s time to bring in higher-order capabilities to guide the learner to better understanding? When should the learner be left alone? What is an efficient use of scarce training resources?
  7. Timing & Project Checkpoints – How do you stage project checkpoints to keep people on track towards a compelling objective but not overwhelmed in the midst of their obligations?
  8. Accomplishment – How do you put all this training effort to work in order to accomplish something useful? What are practical goals that can also contribute to intrinsic motivation?

I’ll be presenting specific guidance, with examples, for each of the above topics (and possibly more) at this year’s Wolfram Technology Conference, alongside one of my programmer apprentices, Isaac Gelman, now a student at the University of Southern California. Specifically, we’ll be presenting at 4:30pm on Thursday, October 18th. Once the video of the presentation is available, I’ll write a detailed follow-up post and link the video.

Until then, happy trails.

Eric Nelson

 

About the Author

Eric Nelson is a Sr. Software Engineer with ArganoMS3, Inc., living in Minneapolis, MN with his wife Alisa, 6yo daughter Freyda, and 4yo son Arthur. Eric received his BS in Electrical Engineering from the University of Minnesota and worked in photovoltaics & thin films, both at MN tech start-ups. In 2013, Eric helped found an inner-city charter middle school and taught courses in technology and entrepreneurship. In 2015, he founded his own cloud software consulting firm, Augment LLC, and began training student cohorts in software design & development skills. He is now focused on building large-scale, secure, futureproof production software systems for smart brands as a member of the ArganoMS3 family. Meanwhile, he continues his effort to develop a scalable programmer training network for high school and college students. He also serves on the board of Minnesota Innovates, a nonprofit dedicated to cultivating MN technology startups building products in emergent technologies (e.g. AI, AR/VR, gene editing, drones, etc.)

Filed Under: Events, Integration, Team Tagged With: Conference, Education, Presentation, Wolfram

COPYRIGHT © 2022 ⬤ ArganoMS³ MOUNTAIN STATE SOFTWARE SOLUTIONS