Azure DevOps – Using APIF-Auto

APIF-Auto, a command-line tool that supports automated API Fortress test execution, is an ideal tool for executing tests in an Azure DevOps workflow.

The pipeline script below serves as a template for creating a step in your Azure DevOps Pipeline for testing your APIs with API Fortress. If you’d like to take a look at the documentation for APIF-Auto, see the “APIF-Auto – The Command-Line Tool” section below.


It’s important to note that this is an example of an Azure DevOps Pipeline. Experienced users are free to configure their workflow as best suits their needs.

trigger:
- master

jobs:
- job: 'apif'
  pool:
    vmImage: 'ubuntu-latest'
  strategy:
    matrix:
      Python37:
        python.version: '3.7'

  steps:
  - script: |
      python -m pip install --upgrade pip
      python -m pip install -r requirements.txt
    displayName: 'Install dependencies'

  - script: |
      python apif-run.py run-all security -S -f junit -o results/TEST-junit.xml
    displayName: 'Run APIF Tests'

  - task: PublishTestResults@2
    condition: succeededOrFailed()
    inputs:
      testResultsFiles: 'results/TEST-junit.xml'
      testRunTitle: 'APIF Test Results'

First, it’s worth mentioning that in this example the APIF-Auto files are in our Azure DevOps repository. Let’s break down what’s happening in the script above:

  • First, we define the OS image we would like to use as the testing environment. In our case we chose the latest Ubuntu, which has support for recent Python versions.
  • Next, in the same scope, we define which version of Python we will use for the test (APIF-Auto is a Python script).
  • Then, in the part labeled “steps,” a few things happen:
    • In the first section labeled “script,” we upgrade “pip” and then install the dependencies from our “requirements.txt” file.
    • In the second section labeled “script,” we run “apif-run.py” to execute all of the tests in our project called “security.”
    • Finally, there is a section labeled “task,” where we evaluate the results output by the “apif-run” execution.
The execution above produces a JUnit XML report, which the PublishTestResults task surfaces in the pipeline’s Tests view.
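Because the report is standard JUnit XML, any JUnit consumer can read it. As a quick illustration (using a generic JUnit layout as an assumption; the exact attributes API Fortress emits may differ), here is a minimal sketch of inspecting such a report with Python’s standard library:

```python
# Minimal sketch: count tests and failures in a JUnit XML report.
# The inline sample is a generic JUnit shape, not verbatim API Fortress output.
import xml.etree.ElementTree as ET

sample = """<testsuite name="security" tests="2" failures="1">
  <testcase name="auth check"/>
  <testcase name="rate limit check">
    <failure message="expected 429"/>
  </testcase>
</testsuite>"""

suite = ET.fromstring(sample)
# A <testcase> with a <failure> child is a failed test.
failed = sum(1 for case in suite.iter("testcase")
             if case.find("failure") is not None)
print(f"{failed} of {suite.get('tests')} tests failed")  # → 1 of 2 tests failed
```

In practice you would call `ET.parse("results/TEST-junit.xml")` on the file the pipeline wrote instead of the inline sample.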

Bitbucket – Using APIF-Auto

APIF-Auto, a command-line tool that supports automated API Fortress test execution, is an ideal tool for executing API Fortress tests in a Bitbucket workflow.

The pipeline script below serves as a template for creating a stage in your Bitbucket Pipeline for testing your APIs with API Fortress. If you’d like to take a look at the documentation for APIF-Auto, see the “APIF-Auto – The Command-Line Tool” section below.


It’s important to note that this is an example of a Bitbucket Pipeline. Experienced users are free to configure their workflow as best suits their needs.

image: python:3.7.3

pipelines:
  default:
    - step:
        caches:
          - pip
        script: # Modify the commands below to build your repository.
          - pip install -r apif-auto-master/requirements.txt
          - python apif-auto-master/apif-run.py run-all security -S -f junit -o test-results/junit.xml

First, it’s worth mentioning that in this example the APIF-Auto files are in our Bitbucket repository. Let’s break down what’s happening in the script above:

  • First, we define the Docker image for Python. We will need this to execute the APIF-Auto Python scripts.
  • Next, we set up the Bitbucket pipeline steps. We cache “pip” so we don’t need to load it on every build.
  • In the “script” section we can see a couple of commands being executed:
    • First, we install the packages defined in “requirements.txt” using pip; these are needed to run APIF-Auto.
    • Next, we execute the APIF-Auto tool to run the tests. In this example we execute all tests within the project “security” and output the results in JUnit format to a folder in the repository named “test-results”. This is one of the folder names that Bitbucket automatically scans for test reports.
    • It is worth mentioning that Bitbucket will automatically parse the resulting “.xml” file to display the results in your pipeline.
Here is an example output:
 

By using the above workflow, we have a modular method of running API Fortress tests in authenticated mode in our Bitbucket pipeline.

Jenkins – APIF-Auto and Github

APIF-Auto, a command-line tool that supports automated API Fortress test execution, is an ideal tool for executing API Fortress tests in a Jenkins workflow.
The pipeline script below serves as a template for creating stages in your Jenkins Pipeline for testing your APIs with API Fortress tests that are stored in GitHub. If you’d like to take a look at the documentation for APIF-Auto, see the “APIF-Auto – The Command-Line Tool” section below.


It’s important to note that this is an example of a Jenkins Pipeline. Experienced Jenkins users are free to configure their workflow as best suits their needs.

node {
  stage('Preparation') {
    git 'https://github.com/theirish81/temp.git'
  }
  stage('Build') {
    // Build steps, if any, go here
  }
  stage('API Fortress') {
    sh 'python /var/jenkins_home/apif-auto/apif-push.py jenkins_project -r -p testing/apifortress'
    sh 'mkdir -p target/apifortress'
    sh 'python /var/jenkins_home/apif-auto/apif-run.py run-all jenkins_project -S -f junit -o target/apifortress/junit.xml'
  }
  stage('Results') {
    junit '**/target/apifortress/junit.xml'
  }
}

Let’s break down what’s happening in the script above:

  • First, we have the “Preparation” stage, where we define the GitHub repository in which our tests are stored.
  • Next, we have the “API Fortress” stage, where a few things are happening:
    • sh 'python /var/jenkins_home/apif-auto/apif-push.py jenkins_project -r -p testing/apifortress' – This command pulls the tests from the GitHub repository we defined in the first stage and pushes them into the API Fortress project “jenkins_project” using the apif-push.py tool.
    • sh 'mkdir -p target/apifortress' – This command creates a directory to store the results of our API Fortress test executions. Remember the -p flag! It keeps the step from failing if the directory already exists on a future run.
    • sh 'python /var/jenkins_home/apif-auto/apif-run.py run-all jenkins_project -S -f junit -o target/apifortress/junit.xml' – This command executes all the tests we pushed into “jenkins_project” using the apif-run.py tool and stores the returned JUnit test results in the directory we created in the previous step.
  • Finally, we have the “Results” stage, where we evaluate the JUnit results to see if the tests passed or failed.
By using the above workflow, we have a modular method of running API Fortress tests stored in GitHub in authenticated mode in our Jenkins pipeline.

GitLab CI – Using APIF-Auto

APIF-Auto, a command-line tool that supports automated API Fortress test execution, is an ideal tool for executing API Fortress tests in a GitLab CI workflow. The pipeline script below serves as a template for creating a stage in your GitLab Pipeline for testing your APIs with API Fortress. If you’d like to take a look at the documentation for APIF-Auto, see the “APIF-Auto – The Command-Line Tool” section below.

It’s important to note that this is an example of a “.gitlab-ci.yml”. Experienced GitLab CI users are free to configure their workflow as best suits their needs. Please mind the YAML formatting.
image: "python:3.7"
before_script:
  - python --version
  - pip install -r requirements.txt
stages:
  - API Fortress
apif:
  stage: API Fortress
  script:
    - python directory/apif-run.py run-all ci_project -S -o output/directory

Just to note: the YAML file could also be configured to make curl calls directly to the API Fortress API to achieve the same behavior.
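If you would also like GitLab to render the results in its pipeline view, GitLab CI can publish a JUnit report as an artifact. A hypothetical variant of the job above (the `directory/` path and `ci_project` key mirror the example; `-f junit` requests JUnit output, and `results/junit.xml` is an illustrative path):

```yaml
apif:
  stage: API Fortress
  script:
    - python directory/apif-run.py run-all ci_project -S -f junit -o results/junit.xml
  artifacts:
    reports:
      junit: results/junit.xml
```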

Micro Focus ALM – Integrate and See Results

One of the key benefits of API Fortress is the ability to deeply integrate the platform with the tools you use today. Micro Focus ALM (Application Lifecycle Management) is another platform that is easy to integrate with. There are two options:
  1. Directly
  2. Through a CI/CD Platform

Direct Integration with Micro Focus ALM

The results of an API Fortress test can be returned directly into ALM. Micro Focus has clear instructions on that method here: Micro Focus ALM Documentation (Add Automated Test Results)

Using a CI/CD Platform

The second method of adding API Fortress test results to Micro Focus ALM is by way of a CI/CD platform. API Fortress integrates with any CI platform, and when that CI platform is also connected with ALM you can view the results in your Micro Focus instance. First, connect your CI/CD platform to ALM, which is detailed in the ALM docs. Next, add API Fortress to your CI/CD workflow, as described in the CI/CD sections of this document.

Jenkins – Using APIF-Auto

APIF-Auto, a command-line tool that supports automated API Fortress test execution, is an ideal tool for executing API Fortress tests in a Jenkins workflow.

The pipeline script below serves as a template for creating a stage in your Jenkins Pipeline for testing your APIs with API Fortress. If you’d like to take a look at the documentation for APIF-Auto, see the “APIF-Auto – The Command-Line Tool” section below. It’s important to note that this is an example of a Jenkins Pipeline. Experienced Jenkins users are free to configure their workflow as best suits their needs.

pipeline {
  agent any
  stages {
    stage('Execute API Fortress Tests') {
      steps {
        sh 'mkdir -p apifortress-reports'
        sh 'python /Path/to/apif-auto/directory/apif-run.py run-all demo -S -f junit -o apifortress-reports/apif.xml'
      }
      post {
        always {
          junit 'apifortress-reports/*.xml'
        }
      }
    }
  }
}

sh 'mkdir -p apifortress-reports' – Let’s break down what’s going on here! First, we’re telling Jenkins to create a new directory called ‘apifortress-reports.’ You can name this directory whatever you’d like, but there are a couple of important notes to remember:
  • First, remember the -p flag! It keeps the step from failing if the directory already exists on a future run.
  • Second, remember the name! We’re going to need it later.

sh 'python /Path/to/apif-auto/directory/apif-run.py run-all demo -S -f junit -o apifortress-reports/apif.xml' – Next, we perform the actual test execution with API Fortress via APIF-Auto. We’re invoking the tool with python and the path to the tool. We’re passing a run-all argument for the project and credentials defined with the demo config key. The test executes in sync mode, reports in JUnit format, and outputs to apifortress-reports/apif.xml. Notice that the output file is placed in the directory we created in the previous step!

junit 'apifortress-reports/*.xml' – In this final step, we’re publishing the JUnit test report located in the directory that we established in the first step and wrote to in the second step.

By using the above workflow, we have a modular method of running API Fortress tests in authenticated mode in our Jenkins pipeline.

APIF-Auto – The Command-Line Tool

Welcome to the API Fortress Command-Line Tools!

The tool itself: https://github.com/apifortress/afcmd/releases
The documentation for the API that the tool leverages: https://apifortressv3.docs.apiary.io/

The tool, or rather, pair of tools, is designed to reduce the amount of legwork that goes into executing or uploading API Fortress tests. The following readme will explain each part of the process. APIF-Auto allows a user to easily integrate API Fortress testing into other workflows. Example use cases are:
  • Executing API Fortress tests from a CI/CD tool
  • Incorporating API Fortress tests in a Git version control plan.
  • Pushing test code from an IDE to the API Fortress platform.
All of these scenarios, and more, can be accomplished with the tool. Let’s take a look at the two major components of the tool:

APIF-RUN

Run allows us to execute tests on the platform and act on the resulting data. We can run tests via the API in either an authenticated or unauthenticated state. By passing credentials, we receive a more verbose test result. We can output this result to a file. We also have access to all of the standard options that API Fortress provides in its API (silent run, dry run, etc.)

RUN EXECUTION FLAGS

  • run-all – RUN ALL – This will execute all of the tests in a chosen project.
  • run-by-tag – RUN BY TAG – This will execute all tests with a selected tag (requires the -t flag to set tag)
  • run-by-id – RUN BY ID – This will execute a test with a specific ID (requires the -i flag to set id)
  • hook – HOOK – This is the webhook of the project you are working with. This can be either an API Fortress URL, or the key from a configuration file (set the path to the config file with the -c tag)
Ex: to run all of the tests in a specific project, we would use the following command string:
python apif-run.py run-all http://mastiff.apifortress.com/yourWebHook

RUN OPTION FLAGS

  • -S – SYNC – This will provide a response body with the result of the test.
  • -f – FORMAT – This will determine the format of the test result output (JSON, JUnit, Bool). REQUIRES SYNC MODE (-S)
  • -d – DRY – This will cause the test run to be a dry run.
  • -s – SILENT – This will cause the test to run in silent mode.
  • -o – OUTPUT – This will write the result of the test to a local file. You must provide the path to the file to be created. Remember your filetype! (.json/.xml)
  • -c – CONFIG – This provides the path to a configuration file which can provide webhooks and user credentials. If no path is specified, the program will look for a config.yml in the same directory as the script (./config.yml)
  • -C – CREDENTIALS – This allows you to manually pass user credentials (username:password) (SUPERSEDES CONFIG FILE)
  • -t – TAG – This is how you pass a tag for RUN BY TAG mode.
  • -i – ID – This is how you pass an ID for RUN BY ID mode.
  • -e – ENVIRONMENT – This is how you pass environmental/override variables. The format is key:value. You can pass multiple sets of environmental variables like so: key:value key1:value1 key2:value2
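To make the -e format concrete, here is a small hypothetical sketch of turning key:value pairs into an override map. This mirrors the format described above; apif-run’s real parsing may differ, and the key names are illustrative only:

```python
# Hypothetical sketch: parse -e style key:value override pairs into a dict.
def parse_overrides(pairs):
    overrides = {}
    for pair in pairs:
        # Split on the first ":" only, so values may themselves contain colons.
        key, _, value = pair.partition(":")
        overrides[key] = value
    return overrides

print(parse_overrides(["domain:staging.example.com", "apiKey:ABC123"]))
# → {'domain': 'staging.example.com', 'apiKey': 'ABC123'}
```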

APIF-PUSH

Push allows us to push tests into API Fortress. When tests are downloaded from the platform, they come as 2 XML files (unit.xml & input.xml). We can use this tool to push those files back to an API Fortress project, either individually or in bulk.

PUSH EXECUTION FLAGS

  • hook – HOOK – This is the webhook of the project you are working with. This can be either an API Fortress URL, or the key from a configuration file (set the path to the config file with the -c tag)

PUSH OPTION FLAGS

  • -p – PATH – This provides the path to the test file you wish to upload. You can pass multiple paths.
  • -r – RECURSIVE – This flag makes the call recursive; it will dive through the directory passed with -p and grab every test in all of its subdirectories.
  • -b – BRANCH – This allows you to specify a Git branch that these test files are attached to. The default is master.
  • -c – CONFIG – This provides the path to a configuration file which can provide webhooks and user credentials. If no path is specified, the program will look for a config.yml in the same directory as the script (./config.yml)
  • -C – CREDENTIALS – This allows you to manually pass user credentials (username:password) (SUPERSEDES CONFIG FILE)
  • -T – TAG – This allows you to pass tags to be appended to the test after it is pushed. This will OVERWRITE ANY EXISTING TAGS. Multiple tags can be passed.
  • -t – ADD TAG – This allows you to add additional tags to a test that already has tags attached.

CONFIGURATION FILE

A configuration file is a YAML file that is formatted as follows:
hooks:
  - key: cool_proj1
    url: https://mastiff.apifortress.com/app/api/rest/v3/A_WEBHOOK
    credentials:
      username: (your username)
      password: (your password)
  - key: uncool_proj
    url: https://mastiff.apifortress.com/app/api/rest/v3/ANOTHER_WEBHOOK
    credentials:
      username: (another username)
      password: (another password)
  - key: unauth_proj
    url: https://mastiff.apifortress.com/app/api/rest/v3/JUST_A_WEBHOOK_WITHOUT_CREDENTIALS
test_directory: /tests
Once you create a configuration file, you can pass the path with -c and the key to the data in place of the normal hook URL. If you also pass credentials, they’ll override the credentials in the configuration file. If you don’t include credentials in the config file, you can pass them manually or leave them out entirely.
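The lookup and precedence rules described above can be sketched as follows. This is a simplified model, with the config represented as a plain dictionary mirroring the YAML structure; the actual tool’s behavior may differ in its details, and the usernames and passwords are placeholders:

```python
# Simplified sketch of resolving a hook key, with CLI credentials
# superseding the config file. Not the tool's actual code.
config = {
    "hooks": [
        {"key": "cool_proj1",
         "url": "https://mastiff.apifortress.com/app/api/rest/v3/A_WEBHOOK",
         "credentials": {"username": "user@example.com", "password": "secret"}},
        {"key": "unauth_proj",
         "url": "https://mastiff.apifortress.com/app/api/rest/v3/JUST_A_WEBHOOK"},
    ]
}

def resolve(key, cli_credentials=None):
    """Return (url, credentials); CLI credentials override the config file."""
    for hook in config["hooks"]:
        if hook["key"] == key:
            # Fall back to config credentials, which may be absent entirely.
            return hook["url"], cli_credentials or hook.get("credentials")
    raise KeyError(f"no hook named {key!r}")

url, creds = resolve("cool_proj1", cli_credentials="me@host.com:pw1")
print(url, creds)
```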

EXAMPLES

Execute all of the tests in a project and output the results to a JUnit/XML file via an authenticated route:
python apif-run.py run-all http://mastiff.apifortress.com/yourWebHook -S -C my@username.com:password1 -f junit -o some/route/results.xml

Push all of the tests from a directory and all of its subdirectories to a project:
python apif-push.py http://mastiff.apifortress.com/yourWebHook -C my@username.com:password1 -r -p some/directory/with/tests

Execute one test in a project by ID, using a config file for credentials and webhook:
python apif-run.py run-by-id config_key -c path/to/config/file -i testidhash8924jsdfiwef891

NOTES

  • The order of the optional arguments passed does not matter.
  • Remember, in a bash environment, anything that has a space in it needs to be wrapped in quotes. This goes for paths, filenames, etc.

POST-RECEIVE SCRIPT FOR GIT

This Post-Receive script is meant to assist in incorporating API Fortress into your Git workflow. Dropping the file into the hooks directory of your .git directory will cause newly committed API Fortress test code to be pushed to the API Fortress platform. The ‘test_directory’ key in the config.yml lets the script know which folder the tests are located in. It watches for commits in this folder and pushes the appropriate code to the platform.

Bamboo – Integrate API Tests & Results

Passing data from API Fortress to Atlassian Bamboo allows Bamboo users to include API Fortress test results in their CI/CD process.

Step 1: Generating a Webhook

The first step to integrating API Fortress into your CI/CD process is to grab the generated API hook for the project in question. To do so, head to the Settings panel in API Fortress. This view, seen below, can be accessed from anywhere in the application by clicking the Gear icon in the top right corner of the screen. Please note you need Manager access to generate a webhook. From Settings, click the API Hooks section and generate the hook for your project. The process can be seen in detail in the .gif below.

Step 2: Select or Create a Bamboo Project

After we’ve created our webhook, calling it from within Bamboo is a fairly simple process. First, create a new project in Bamboo. You can also add to an existing project from this screen.

Step 3: Adding an HTTP Call

Next, we need to add an HTTP Call component and enter the webhook we generated. Depending on what you wish the call to API Fortress to trigger, you may append different routing to the end of the webhook. The API Fortress API documentation is located at https://apifortressv3.docs.apiary.io/.

Step 4: Parsing Results

After the request is sent to the API Fortress API, we’ll need to save the JUnit data that’s returned. We do so by adding a JUnit Parser step. Once the above steps are completed and saved, the build sequence will make a call to API Fortress upon execution, receive the results of the tests, and parse the results.

Jenkins – Zephyr Enterprise Integration

Step 1 – Install the Zephyr Enterprise Jenkins Plugin

The first step to exporting data to Zephyr Enterprise is to download and configure the Zephyr Enterprise plugin. From the Jenkins main page, click “Manage Jenkins” and then “Manage Plugins.” From the “Manage Plugins” window, search for and install “Zephyr Enterprise.”

Step 2 – Configure the Zephyr Enterprise Jenkins Plugin

Click the “Configure System” option in the “Manage Jenkins” menu. Scroll down to “Zephyr Server Configuration” and enter your domain and login credentials. Click “Test Configuration.” If the test is successful, your Jenkins is properly configured to communicate with your Zephyr instance.

Step 3 – Generate an API Hook

Next, we need to create an API Fortress Webhook to export the test data to Jenkins. To do so, head to the Settings panel in API Fortress. This view, seen below, can be accessed from anywhere in the application by clicking the Gear icon in the top right corner of the screen. Note: You need Manager access to generate a Webhook. From Settings, click the API Hooks section and generate the hook for your project.

The next step depends on what you’re trying to test. The following steps assume that you wish to run all of the tests in a project. You can also run a single test, or a series of tests with a certain tag. If you would like to learn more about that, please contact API Fortress.

To import our data into Jenkins as JUnit, we’ll export it in JUnit format using a query parameter. Since we already have our API hook, we just need to add the parameter to do so. As it stands, our API hook is as follows:

https://mastiff.apifortress.com/app/api/rest/v3/86f81b19-2d29-4879-91d9-6dbb2271fec0861

The normal command to run all of the tests in the project, per the API Fortress docs, is /tests/run-all, so we append this to the end of the API call. We also need to request JUnit output via query parameters. First, we set sync to true, and then we set format to junit. In short, we append ?sync=true&format=junit. That gives us the final API call:

https://mastiff.apifortress.com/app/api/rest/v3/86f81b19-2d29-4879-91d9-6dbb2271fec0861/tests/run-all?sync=true&format=junit

Great! If we make this API call via a browser or a tool like Postman, we can see our results in JUnit.
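The URL assembly above is simple string work; as a sanity check, here is a small Python sketch that builds the same call (the hook ID is the example one from the text, and the query values match the parameters described above):

```python
# Build the run-all call with synchronous JUnit output, as assembled by hand above.
from urllib.parse import urlencode

hook = "https://mastiff.apifortress.com/app/api/rest/v3/86f81b19-2d29-4879-91d9-6dbb2271fec0861"
query = urlencode({"sync": "true", "format": "junit"})
url = f"{hook}/tests/run-all?{query}"
print(url)
# → https://mastiff.apifortress.com/app/api/rest/v3/86f81b19-2d29-4879-91d9-6dbb2271fec0861/tests/run-all?sync=true&format=junit
```

The same string could then be pasted into a browser, Postman, or the Jenkins HTTP Request step described below.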

Step 4 – Execute HTTP Call from Jenkins

From the Jenkins dashboard, let’s create a New Item. Next, we’re going to name and create a Freestyle Project. Click the OK button to proceed. Scroll down the page until you see the “Add Build Step” pulldown menu. Select “HTTP Request.” This option will only be available if you have installed the HTTP Request plugin. We’re going to paste the API call we created above into the URL line. If we save this configuration, we can run the build and see Jenkins receive our JUnit test results in real time. Next, we’re going to click the “Advanced” button. Scroll to the bottom of the newly opened view and enter a filename of your choice into the “Output Response to File” line.

Step 5 – Publish JUnit Test Results in Jenkins

Now that we’re receiving JUnit data from API Fortress in Jenkins, we need to publish the data so that we can use it further downstream. Click “Add Post-Build Action” and then “Publish JUnit Data.”
In the new window, enter the same filename that we saved our JUnit data to in the API call in the previous step.
Now, we’ve enabled Jenkins to execute API Fortress tests and receive the test data in JUnit format. Next, we’re going to allow it to pass this data on to Zephyr.

Step 6 – Exporting Data to Zephyr

Click “Add Post-Build Action” and select “Publish Test Results to Zephyr Enterprise.” Since we configured the Zephyr plugin in Step 2, Zephyr information should populate automatically from your Zephyr Enterprise instance. Select the project, release, and cycle of your choice and save the build. Test data will now export to Zephyr every time this project is built.

Jenkins – Using the API

Step 1 – Install Jenkins HTTP Plugin

Log in to your Jenkins account. First, click “Manage Jenkins,” then click “Manage Plugins.” We’re going to need the HTTP Request plugin. To find the plugins, click the “Available” tab in the Plugins menu and use the filter in the top right corner to search for it.

Step 2 – Generate an API Hook

The first step to integrating API Fortress into your CI/CD process is to grab the generated API hook for the project in question. To do so, head to the Settings panel in API Fortress. This view, seen below, can be accessed from anywhere in the application by clicking the Gear icon in the top right corner of the screen. Please note you need Manager access to generate a webhook. From Settings, click the API Hooks section and generate the hook for your project.

The next step depends on what you’re trying to test. The following steps assume that you wish to run all of the tests in a project. You can also run a single test, or a series of tests with a certain tag. If you would like to learn more about that, please contact API Fortress.

To import our data into Jenkins as JUnit, we’ll export it in JUnit format using a query parameter. Since we already have our API hook, we just need to add the parameter to do so. As it stands, our API hook is as follows:

https://mastiff.apifortress.com/app/api/rest/v3/86f81b19-2d29-4879-91d9-6dbb2271fec0861

The normal command to run all of the tests in the project, per the API Fortress docs, is /tests/run-all, so we append this to the end of the API call. We also need to request JUnit output via query parameters. First, we set sync to true, and then we set format to junit. In short, we append ?sync=true&format=junit. That gives us the final API call:

https://mastiff.apifortress.com/app/api/rest/v3/86f81b19-2d29-4879-91d9-6dbb2271fec0861/tests/run-all?sync=true&format=junit

Great! If we make this API call via a browser or a tool like Postman, we can see our results in JUnit. We’re almost there.

Step 3 – Execute HTTP Call from Jenkins

From the Jenkins dashboard, let’s create a New Item. Next, we’re going to name and create a Freestyle Project. Click the OK button to proceed. Scroll down the page until you see the “Add Build Step” pulldown menu. Select “HTTP Request.” This option will only be available if you installed the HTTP Request plugin in the previous step. We’re going to paste the API call we created above into the URL line. If we save this configuration, we can run the build and see Jenkins receive our JUnit test results in real time. These test results can then be passed along to platforms like qTest or Zephyr in a CI/CD pipeline.