Davide Moro: Hello pytest-play!

pytest-play is a rec&play (rec not yet available) pytest plugin that lets you execute a set of actions and assertions using commands serialized in JSON format. It tries to make test automation more affordable for non-programmers or non-Python programmers for browser, functional, API, integration or system testing, thanks to its pluggable architecture and third party plugins that let you interact with the most common databases and systems.

In addition it also provides some facilities for writing browser UI actions (e.g., implicit waits before interacting with an input element; the Cypress framework was a great source of inspiration for me) and asynchronous checks (e.g., wait until a certain condition is true).

You can use pytest-play programmatically (e.g., use the pytest-play engine as a library for standalone scenarios, or use the pytest-play API to implement BDD steps).

Starting from pytest-play>1.4.x, a new experimental feature was introduced that lets you use pytest-play as a framework, creating Python-free automated tests based on a JSON serialization format for actions and assertions (in the near future the more user-friendly YAML format will be supported).

So now, depending on your needs and skills, you can choose to use pytest-play as a library or as a framework.

In this article I’m going to show how to implement a Plone CMS based login test using the Python-free approach, without having to write a single line of Python code.

What is pytest-play and why it exists

In this section I’m going to add more information about the pytest-play approach and other considerations: if you want to see right away how to implement our Python-free automated login test, jump to the next section!

Hyper specialized tool problems

There are many commercial products or tools that offer solutions for API testing only or browser testing only. Sometimes hyper-specialized tools might fit your needs (e.g., a content management system based web application) but sometimes they are not helpful for other distributed applications.

For example, an API-only platform is not effective for testing a CQRS based application. It is not enough to test for an HTTP 200 OK response; you should also verify that all the expected commands are generated on the event store (e.g., Cassandra) or other side effects.

Another example: IoT applications and UI/browser-only testing platforms. You cannot test reactive web apps with a browser alone; you should also control simulated device activities (e.g., MQTT, queues, API calls for messages/alarms/reports) or any other web based interactions performed by other users (e.g., HTTP calls); you might need to check the expected results asynchronously on web sockets when some actions are performed, instead of using a real browser.

What is pytest-play

In other words, pytest-play is an open source testing solution based on the pytest framework that lets you:

  • write actions and cross assertions against different protocols and test levels in the same scenario (e.g., check an HTTP response and make database assertions)
  • minimize the development of Selenium-based asynchronous wait functions thanks to implicit waits that let you interact with web elements only when they are ready. You just focus on user actions, you are more productive and you reduce the chance of writing bad or fragile asynchronous wait functions
  • implement polling-based asynchronous waiter commands based on custom expressions when needed

using a serialization format (JSON at the time of writing, YAML in the near future) that should be more affordable for non-technical testers, non-programmers or programmers with no Python knowledge.

Potentially you will be able to share and execute a new scenario not yet included in your test library by copying and pasting a pytest-play JSON into a Jenkins "build with parameters" form like the following one (see the PLAY textarea):

From http://davidemoro.blogspot.it/2018/03/test-automation-python-pytest-jenkins.html

In addition, if you are a technical user you can extend it by writing your own plugins, provide integration with external tools (e.g., test management tools, software metrics engines, etc.) and decide the test abstraction depending on deadlines/skills/strategy (e.g., plain JSON files, a programmatic approach based on JSON scenarios, or BDD steps based on pytest-play).

What pytest-play is not

For example, pytest-play doesn’t provide a test scenario recorder; instead it encourages users to understand what they are doing.

It requires very little programming knowledge for writing some assertions using simple code expressions, but with a little training it is still affordable for non-programmers (you don’t have to learn a programming language, just some basic assertions).

It is not feature complete but it is free software.

If you want to know more, I’ve talked about this in a previous article:

A pytest-play example: parametrized login (featuring Plone CMS)

In this example we’ll see how to write and execute pure JSON pytest-play scenarios with test data decoupled from the test implementation, plus test parametrization. I’m using the online Plone 5 demo site kindly hosted by Andreas Jung (www.zopyx.com).

The project is available here:

Once you have installed pytest and the dependencies (there is a requirements.txt file, see the above link), the tests can be launched like a normal pytest project:

$ pytest --variables env-ALPHA.yml --splinter-webdriver firefox --splinter-screenshot-dir /tmp -x

You can have multiple environment/variable files, e.g., env-ALPHA.yml containing the alpha base URL and any other variables:

pytest-play:
  base_url: https://plone-demo.info

Our login scenario test_login.json contains the following (as you can see there are NO asynchronous waits, because they are not needed for basic examples thanks to implicit waits, so you can focus on actions and assertions):

{
  "steps": [
    {
      "comment": "visit base url",
      "type": "get",
      "url": "$ base_url"
    },
    {
      "comment": "click on login link",
      "locator": {
        "type": "id",
        "value": "personaltools-login"
      },
      "type": "clickElement"
    },
    {
      "comment": "provide a username",
      "locator": {
        "type": "id",
        "value": "__ac_name"
      },
      "text": "$ username",
      "type": "setElementText"
    },
    {
      "comment": "provide a password",
      "locator": {
        "type": "id",
        "value": "__ac_password"
      },
      "text": "$ password",
      "type": "setElementText"
    },
    {
      "comment": "click on login submit button",
      "locator": {
        "type": "css",
        "value": ".pattern-modal-buttons > input[name=submit]"
      },
      "type": "clickElement"
    },
    {
      "comment": "wait for page loaded",
      "locator": {
        "type": "css",
        "value": ".icon-user"
      },
      "type": "waitForElementVisible"
    }
  ]
}

Plus an optional test scenario metadata file, test_login.ini, containing pytest keywords and decoupled test data:

[pytest]
markers =
    login
test_data =
    {"username": "siteadmin", "password": "siteadmin"}
    {"username": "editor", "password": "editor"}
    {"username": "reader", "password": "reader"}

Thanks to the metadata file you have just one scenario, and it will be executed 3 times (as many times as there are test data rows)!

Et voilà, let’s see our scenario in action, without having written a single line of Python code:

There is only a warning I have to remove, but it worked and we got exactly 3 different test runs for our login scenario, as expected!

pytest-play status

pytest-play should still be considered experimental software, and many features need to be implemented or refactored:

  • YAML instead of JSON. YAML will become the primary configuration format (it should be more user friendly, as suggested by some users)
  • the API should not be considered stable until a future 2.x version
  • improve API testing when using pure JSON scenarios by registering functions (e.g., invoke a function returning a valid authentication bearer for authenticated API testing)
  • implement some python-requests library features not yet implemented in play_requests (e.g., cookies)
  • refactor parametrization and templating (Jinja?)
  • implement additional Selenium actions (e.g., right clicks, file uploads, etc.)
  • implement other cool Cypress ideas, enabling non-expert testers to write more robust Selenium scenarios
  • add a page object abstraction to pytest-play based Selenium scenarios, with new commands that let you interact with page regions and complex UI widgets
  • ownership change, waiting for pytest-dev core developers’ approval. The ownership will probably change soon from davidemoro/pytest-play to pytest-dev/pytest-play, once the approval process is complete

PyCon Nove @ Florence

If you are going to attend the next PyCon Nove in Florence, don’t miss the following pytest-play talk presented by Serena Martinetti:

    Do you like pytest-play?

    Tweets about pytest-play happen on @davidemoro.
    Positive or negative feedback is always appreciated. If you find the concepts behind pytest-play interesting, let me know with a tweet, add a new pytest-play adapter and/or add a GitHub star if you liked it.


    Davide Moro: Test automation framework thoughts and examples with Python, pytest and Jenkins

    In this article I’ll share some personal thoughts about test automation frameworks; you can take inspiration from them if you are going to evaluate different test automation platforms or assess your current test automation solution (or solutions).

    Although this is a generic article about test automation, you’ll find many examples explaining how to address some common needs using the Python based test framework pytest and the Jenkins automation server: use the information contained here just as a comparison, and feel free to comment, sharing alternative methods or ideas coming from different worlds.

    It contains references to some well (or less) known pytest plugins or testing libraries too.

    Before talking about automation and test automation framework features and characteristics let me introduce the most important test automation goal you should always keep in mind.

    Test automation goals: ROI

    You invest in automation for a future return on investment.
    Simpler approaches let you start more quickly, but in the long term they don’t perform well in terms of ROI, and vice versa. In addition, the initial complexity due to a higher level of abstraction may produce better results in the medium or long term: better ROI and some benefits for non-technical testers too. Have a look at the Test Automation Engineer ISTQB certification syllabus for more information:

    So what I mean is that test automation is not easy: it is not just recording some actions or writing some automated test procedures, because how you decide to automate things affects the ROI. Your test automation strategy should consider your testers’ current technical skills and future evolution, considerations about how to improve your system’s testability (is your software testable?), good test design and architecture/system/domain knowledge. In other words, be wary of vendors selling “silver bullet” solutions promising smooth test automation for everyone, especially rec&play solutions: there are no silver bullets.

    Test automation solution features and characteristics

    A test automation solution should be generic and flexible enough, otherwise there is the risk of having to adopt different and maybe incompatible tools for different kinds of tests. Try to imagine the mess of the following situation: one tool or commercial service for browser based tests only, based on rec&play; one tool for API testing only; performance test frameworks that don’t let you reuse existing scenarios; one tool for BDD-only scenarios; different Jenkins jobs with different settings for each different tool; no test management tool integration; etc. A unique solution, if possible, would be better: something that lets you choose the level of abstraction and doesn’t force you into one. Something that lets you start simple and follows your future needs and the skill evolution of your testers.
    That’s one of the reasons why I prefer pytest over a hyper-specialized solution like behave, for example: if you combine pytest+pytest-bdd you can write BDD scenarios too, and you are not forced to use a BDD-only test framework (losing pytest’s flexibility and tons of additional plugins).

    And now, after this preamble, an unordered list of features or characteristics that you may consider for your test automation solution software selection:

    • a fine grained test selection mechanism that allows you to be very selective when choosing which tests to launch
    • parametrization
    • high reuse
    • test execution logs easy to read and analyze
    • easy target environment switch
    • block on first failure
    • repeat your tests for a given amount of times
    • repeat your tests until a failure occurs
    • support parallel executions
    • provide integration with third party software like test management tools
    • integration with cloud services or browser grids
    • execute tests in debug mode or with different log verbosity
    • support random test execution order (the order should be reproducible, thanks to a random seed, if problems occur)
    • versioning support
    • integration with external metrics engine collectors
    • support different levels of abstraction (e.g., keyword driven testing, BDD, etc)
    • rerun last failed
    • integration with platforms that let you test against a large combination of OS and browsers if needed
    • are you able to extend your solution writing or installing third party plugins?

    Typically a test automation engineer will drive automated test runs using the framework’s command line interface (CLI) during test development, but you’ll find out very soon that you need an automation server for long running tests, scheduled builds and CI: here comes Jenkins. Jenkins can also be used by non-technical testers for launching test runs or initializing an environment with some test data.

    Jenkins

    What is Jenkins? From the Jenkins website:

    Continuous Integration and Continuous Delivery. As an extensible automation server, Jenkins can be used as a simple CI server or turned into the continuous delivery hub for any project.

    So thanks to Jenkins, everyone can launch a parametrized automated test session just using a browser: no command line and nothing installed on your personal computer. More power to non-technical users thanks to Jenkins!

    With Jenkins you can easily schedule recurring automatic test runs, start parametrized test runs remotely via external software, implement CI and many other things. In addition, as we will see, Jenkins is quite easy to configure and manage thanks to its through-the-web configuration and/or Jenkins pipelines.

    Basically Jenkins is very good at starting builds and jobs in general. In this case Jenkins will be in charge of launching our parametrized automated test runs.
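As an illustration (the stage structure, parameter names and file names below are my own assumptions, not taken from the article), a parametrized Jenkins declarative pipeline driving such test runs could be sketched like this:

```groovy
// Hypothetical Jenkinsfile: exposes a "build with parameters" form
// and forwards the chosen values to the pytest command line.
pipeline {
    agent any
    parameters {
        choice(name: 'ENVIRONMENT', choices: ['DEV', 'ALPHA'],
               description: 'Target environment (bound to a variables file)')
        string(name: 'KEYWORDS', defaultValue: '',
               description: 'pytest -k expression; empty selects all tests')
    }
    stages {
        stage('Test') {
            steps {
                // -k '' matches everything, so an empty KEYWORDS is harmless
                sh "pytest --variables env-${params.ENVIRONMENT}.yml -k '${params.KEYWORDS}'"
            }
        }
    }
}
```

Each form option simply maps to a pytest command line flag, which is the pattern described throughout the rest of this article.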

    And now let’s talk a little bit of Python and the pytest test framework.

    Python for testing

    I don’t know if there are any articles on the net with statistics about the correlation between Test Automation Engineer job offers and the Python programming language, compared with other programming languages. If you find a similar resource, please share it with me!

    My personal feeling, observing many Test Automation Engineer job offers (or any similar QA job with some automation flavor) for a while, is that the word Python is very common. Most of the time it is one of the nice-to-have requirements, and other times it is mandatory.

    Let’s see why Python is the language of choice for many QA departments, even in companies that are not using Python for building their products or solutions.

    Why Python for testing

    Why is Python becoming so popular for test automation? Probably because it is more affordable for people with little or no programming knowledge compared to other languages. In addition, the Python community is very supportive and friendly, especially with newcomers, so if you are planning to attend any Python conference be prepared to fall in love with this fantastic community and make new friends (friends, not only connections!). For example, at the time of writing you are still in time to attend PyCon Nove 2018 in beautiful Florence (even better if you like history, good wine, good food and meeting great people):

    You can just compare the most classical hello world, for example with Java:

    public class HelloWorld {
        public static void main(String[] args) {
            System.out.println("Hello, World!");
        }
    }

    and compare it with the Python version now:

    print("Hello, World!")

    Do you see any difference? If you are trying to explain to a non-programmer how to print a line in the terminal window with Java, you’ll have to introduce public, static, void, class, System, installing a runtime environment (choosing from different versions), installing an IDE, running javac, etc., and only at the end will you be able to see something printed on the screen. With Python, which most of the time comes preinstalled in many distributions, you just focus on what you need to do. Requirements: a text editor and Python installed. If you are not experienced you start with a simple approach, and later you can progressively learn more advanced testing approaches.

    And what about test assertions? Compare for example a Javascript based assertion:

    expect(b).not.toEqual(c);

    with the Python version:

    assert b != c

    So no expect(a).not.toBeLessThan(b), expect(c >= d).toBeTruthy() or expect(e).toBeLessThan(f): with Python you just say assert a >= b, so there is nothing to remember for assertions!
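To see how little ceremony pytest requires, here is a complete test module using nothing but plain assert statements (file name and values are illustrative):

```python
# test_assertions.py - plain Python asserts, no assertion API to learn
def test_comparisons():
    a, b = 3, 5
    assert a != b
    assert a < b
    # on failure pytest rewrites the assert and prints both operands,
    # so you get a readable diff without expect(...)-style helpers
    assert [a, b] == [3, 5]
```

Run it with pytest and a failing assert automatically prints the compared values.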

    Python is a big fat and very powerful programming language but it follows a “pay only for what you eat” approach.

    Why pytest

    If Python is the language of your choice, you should consider the pytest framework and its high quality community plugins; I think it is a good starting point for building your own test automation solution.

    The pytest framework (https://docs.pytest.org/en/latest/) makes it easy to write small tests, yet scales to support complex functional testing for applications and libraries.

    Most important pytest features:

    • simple assertions instead of invented assertion APIs (.not.toEqual or self.assert*)
    • auto discovery of test modules and functions
    • effective CLI for controlling what is going to be executed or skipped using expressions
    • fixtures: easy management of the lifecycle of long-lived test resources; together with parametrization they make it easy and fun to implement what you found hard and boring with other frameworks
    • fixtures as function arguments, a dependency injection mechanism for test resources
    • overriding fixtures at various levels
    • framework customizations thanks to pluggable hooks
    • very large third party plugin ecosystem

    I strongly suggest having a look at the pytest documentation, but I’d like to show some examples of fixtures, code reuse, test parametrization and improved maintainability of your tests. If you are not a technical reader you can skip this section.

    I’ll try to explain fixtures with practical examples based on questions and answers:

    • When should a new instance of a test resource be created?
      You can control that with the fixture scope (session, module, class, function, or more advanced options like autouse). Session means that your test resource will live for the entire session, module/class for all the tests contained in that module or class; with function you’ll always have a fresh instance of your test resource for each test
    • How can I define teardown actions at the end of the test resource’s life?
      You can add a sort of fixture finalizer after the yield line that will be invoked at the end of the test resource’s lifecycle. For example you can close a connection, wipe out some data, etc.
    • How can I execute all my existing tests once for each of my fixture configurations?
      You can do that with params. For example you can reuse all your existing tests to verify the integration with different real databases or SMTP servers. Or, if you have a web application offering the same features deployed with a different look&feel for different brands, you can reuse all your existing functional UI tests thanks to pytest’s fixture parametrization and a page object pattern, where different look&feel doesn’t mean only different CSS but different UI components (e.g., completely different datetime widgets or navigation menus), component disposition in the page, etc.
    • How can I decouple test implementation and test data?
      Thanks to parametrize you can decouple them and write your test implementation just once. Your test will be executed as many times as you have different test data rows

    Here you can see an example of fixture parametrization (test_smtp will be executed twice because there are 2 different fixture configurations):

    import pytest
    import smtplib

    @pytest.fixture(scope="module",
                    params=["smtp1.com", "smtp2.org"])
    def smtp(request):
        smtp = smtplib.SMTP(request.param, 587, timeout=5)
        yield smtp
        print("finalizing %s" % smtp)
        smtp.close()

    def test_smtp(smtp):
        # use the smtp fixture (e.g., smtp.sendmail(...))
        # and make some assertions.
        # The same test will be executed twice (2 different params)
        ...

    And now an example of test parametrization:

    import pytest

    @pytest.mark.parametrize("test_input,expected", [
        ("3+5", 8),
        ("2+4", 6),
        ("6*9", 42),
    ])
    def test_eval(test_input, expected):
        assert eval(test_input) == expected

    For more info see:

    This is only pytest, as we will see there are many pytest plugins that extend the pytest core features.

    Pytest plugins

    There are hundreds of pytest plugins; the ones I am using most frequently are:

    • pytest-bdd, BDD library for the pytest runner
    • pytest-variables, plugin for pytest that provides variables to tests/fixtures as a dictionary via a file specified on the command line
    • pytest-html, plugin for generating HTML reports for pytest results
    • pytest-selenium, plugin for running Selenium with pytest
    • pytest-splinter, a pytest-selenium alternative based on Splinter; pytest, Splinter and Selenium integration for anyone interested in browser interaction in tests
    • pytest-xdist, a py.test plugin for test parallelization, distributed testing and loop-on-failures testing modes
    • pytest-testrail, pytest plugin for creating TestRail runs and adding results on the TestRail test management tool
    • pytest-randomly, a pytest plugin to randomly order tests and control random seed (but there are different random order plugins if you search for “pytest random”)
    • pytest-repeat, plugin for pytest that makes it easy to repeat a single test, or multiple tests, a specific number of times. You can repeat a test or group of tests until a failure occurs
    • pytest-play, an experimental rec&play pytest plugin that lets you execute a set of actions and assertions using commands serialized in JSON format. It makes test automation more affordable for non-programmers or non-Python programmers for browser, functional, API, integration or system testing, thanks to its pluggable architecture and many plugins that let you interact with the most common databases and systems. It also provides some facilities for writing browser UI actions (e.g., implicit waits before interacting with an input element) and asynchronous checks (e.g., wait until a certain condition is true)

    Python libraries for testing:

    • PyPOM, python page object model for Selenium or Splinter 
    • pypom_form, a PyPOM abstraction that extends the page object model applied to forms thanks to declarative form schemas

    Scaffolding tools:

    • cookiecutter-qa, generates a test automation project ready to be integrated with Jenkins and with the test management tool TestRail that provides working hello world examples. It is shipped with all the above plugins and it provides examples based on raw splinter/selenium calls, a BDD example and a pytest-play example 
    • cookiecutter-performance, generates a tox based environment based on Taurus bzt for performance tests. BlazeMeter-ready for distributed/cloud performance tests. Thanks to the bzt/Taurus pytest executor you will be able to reuse all your pytest based automated tests for performance tests

    Pytest + Jenkins together

    We’ve discussed Python, pytest and Jenkins, the main ingredients of our cocktail recipe (shaken, not stirred). Optional ingredients: integration with external test management tools and Selenium grid providers.

    Thanks to pytest and its plugins you have a rich command line interface (CLI); with Jenkins you can schedule automated builds, set up CI, let non-technical users or other stakeholders execute parametrized test runs, or build always-fresh test data on the fly for manual testing, etc. You just need a browser; nothing installed on your computer.

    Here you can see how our recipe looks like:

    Now let’s comment on all the features provided by the Jenkins “build with parameters” graphical interface, explaining option by option when and why they are useful.

    Target environment (ENVIRONMENT)

    In this article we are not talking about regular unit tests, the basis of your testing pyramid. Instead we are talking about system, functional, API, integration and performance tests to be launched against a particular instance of an integrated system (e.g., dev, alpha or beta environments).

    You know, unit tests are good but they are not sufficient: it is important to verify whether the integrated system (sometimes different complex systems developed by different teams under the same or third party organizations) works as it is supposed to. It is important because it might happen that 100% unit tested systems don’t play well together after integration, for many different reasons. So with unit tests you take care of your code quality; with higher test levels you take care of your product quality. Thanks to these tests you can confirm an expected product behavior or criticize your product.

    So thanks to the ENVIRONMENT option you will be able to choose one of the target environments. It is important to be able to reuse all your tests and launch them against different environments without having to change your testware code. Under the hood the pytest launcher will switch between different environments thanks to the pytest-variables parametrization, using the --variables command line option, where each available option in the ENVIRONMENT select element is bound to a variables file (e.g., DEV.yml, ALPHA.yml, etc.) containing what the testware needs to know about the target environment.

    Generally speaking you should be able to reuse your tests without any modification thanks to a parametrization mechanism. If your test framework doesn’t let you change target environment and forces you to modify your code, change framework.

    Browser settings (BROWSER)

    This option makes sense only if you are going to launch browser based tests; otherwise it will be ignored for other types of tests (e.g., API or integration tests).

    You should be able to select a particular browser version (latest or a specific one) if any of your tests require a real browser (not needed for API tests, to make one example), and preferably you should be able to integrate with a cloud system that allows you to use any combination of real browsers and OS systems (not only a minimal subset of versions and only Firefox and Chrome, as several online test platforms do). Thanks to the BROWSER option you can choose which browser and version to use for your browser based tests. Under the hood the pytest launcher will use the --variables command line option provided by the pytest-variables plugin, where each option is bound to a file containing the browser type, version and capabilities (e.g., FIREFOX.yml, FIREFOX-xy.yml, etc.). Thanks to pytest, or any other code based testing framework, you will be able to combine browser interactions with non-browser actions or assertions.

    A lot of big fat warnings about rec&play online platforms for browser testing, or if you want to implement your testing strategy using only (or too many) browser based tests. You shouldn’t consider only whether they provide a wide range of OSes and versions and the most common browsers. They should also let you perform non-browser based actions or assertions (interaction with queues, database interaction, HTTP POST/PUT/etc. calls, etc.). What I mean is that sometimes a browser alone is not sufficient for testing your system: it might be good for a CMS, but if you are testing an IoT platform you don’t have enough control and you will write completely useless or low value tests (e.g., pure UI checks instead of testing reactive side effects depending on external triggers, reports, device activity simulations causing some effects on the web platform under test, etc.).

    In addition, be aware that some browser based online testing platforms don’t use Selenium as their browser automation engine under the hood. For example, during a software selection I found an online platform using some Javascript injection for implementing user interactions inside the browser, and this might be very dangerous. Let’s consider a login page whose input elements become ready for accepting user input only when some conditions are met. If for some reason a bug never unlocks the disabled login form behind a spinner icon, your users won’t be able to log in to that platform. Using Selenium you’ll get a failing result due to a timeout error (the test will wait for elements that will never be ready to interact with, and after a few seconds it will raise an exception), and that’s absolutely correct. Using that platform the test was green, because under the hood the input element interaction was implemented using DOM actions, with the final result of having all your users stuck: how can you trust such a platform?

    OS settings (OS)

    This option is useful for browser based tests too. Many Selenium grid vendors provide real browsers on real OS systems, and you can choose the desired combination of versions.

    Resolution settings (RESOLUTION)

    As for the above option, many vendor solutions let you choose the desired screen resolution for automated browser based testing sessions.

    Select tests by names expressions (KEYWORDS)

    pytest lets you select the tests you are going to launch by choosing a subset of tests that matches a pattern language based on test and module names.

    For example, I find it very useful to add the test management tool reference in test names; this way you will be able to launch exactly that test:

    c93466

    Or for example all test names containing the login word but not c92411:

    login and not c92411

    Or, if you organize your tests in different modules, you can just specify the folder name and you’ll select all the tests that live under that module:

    api

    Under the hood the pytest command will be launched with -k "EXPRESSION", for example:

    -k “c93466”

    It is used in combination with markers, a sort of test tags.
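As an illustration, embedding test management IDs in test names makes the -k selection trivial (the IDs and test bodies below are made up):

```python
# test_login.py - hypothetical tests with TestRail-style IDs in their names
def test_c93466_login_ok():
    assert 1 + 1 == 2  # placeholder body

def test_c92411_login_wrong_password():
    assert "admin" != "guest"  # placeholder body

# pytest -k "c93466"               -> runs only the first test
# pytest -k "login and not c92411" -> also runs only the first test
```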

    Select tests to be executed by tag expressions (MARKERS)

    Markers can be used alone or in conjunction with keyword expressions. They are a sort of tag expression that lets you select just the minimum set of tests for your test run.

    Under the hood the pytest launcher uses the command line syntax -m "EXPRESSION".

    For example, here is a marker expression that selects all tests marked with the edit tag, excluding the ones marked with CANBusProfileEdit:

    edit and not CANBusProfileEdit

    Or execute only edit negative tests: 

    edit and negative

    Or all integration tests:

    integration

    It's up to you to create granular markers for features and whatever else you need to select your tests (e.g., functional, integration, fast, negative, ci, etc.).

    Test management tool integration (TESTRAIL_ENABLE)

    All my tests are decorated with the test case identifier provided by the test management tool; in my company we use TestRail.

    If this option is enabled, the results of executed tests will be reported to the test management tool.

    Implemented using the pytest-testrail plugin.

    Enable debug mode (DEBUG)

    The debug mode enables verbose logging.

    In addition, for browser based tests, it opens Selenium grid sessions with debug capabilities enabled (https://www.browserstack.com/automate/capabilities): for example verbose browser console logs, video recordings, screenshots for each step, etc. In my company we use a local installation of Zalenium and BrowserStack Automate.
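
A DEBUG switch of this kind typically just toggles extra desired capabilities before the grid session is opened. The sketch below uses real BrowserStack capability names, but treat the exact set and the helper function as illustrative assumptions, not the article's actual code:

```python
# Sketch: a DEBUG flag that switches on extra Selenium grid capabilities.
# Capability keys follow BrowserStack's documented names; the function
# itself is a hypothetical helper for illustration only.
def build_capabilities(debug=False):
    capabilities = {"browser": "Chrome", "os": "Windows"}
    if debug:
        capabilities.update({
            "browserstack.debug": "true",       # screenshots for each step
            "browserstack.console": "verbose",  # verbose browser console logs
            "browserstack.video": "true",       # video recording of the session
        })
    return capabilities

print(build_capabilities(debug=True))
```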

    Block on first failure (BLOCK_FIRST_FAILURE)

    This option is very useful for the following needs:

    • a new build was deployed and you want to stop on the very first failure for a subset of sanity/smoke tests
    • you are launching repeated, long running, parallel tests and you want to block on first failure

    The first usage lets you gain confidence with a new build, stopping at the very first failure so you can analyze what happened.

    The second usage is very helpful for:

    • random problems (playing with number of repeated executions, random order and parallelism you can increase the probability of reproducing a random problem in less time)
    • memory leaks
    • testing system robustness: you can stress your system by running some integration tests sequentially and then increasing the parallelism level as long as your local computer is able to sustain the load. For example, launching 24+ parallel integration tests on a simple laptop with pytest running on a virtual machine is still fine. If you need something heavier you can use distributed pytest-xdist sessions or scale further with BlazeMeter

    As you can imagine, you may combine this option with COUNT, PARALLEL_SESSIONS, RANDOM_ENABLE and DEBUG depending on your needs. You can test your tests' robustness too.

    Under the hood this is implemented using pytest's -x option.

    Parallel test executions (PARALLEL_SESSIONS)

    Under the hood this is implemented with pytest-xdist's -n NUM command line option, which lets you execute your tests with the desired parallelism level.

    pytest-xdist is very powerful and provides more advanced options and network distributed executions. See https://github.com/pytest-dev/pytest-xdist for further options.
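
The payoff of -n NUM is easy to picture: independent tests finish faster when they run concurrently. The sketch below is illustrative only; pytest-xdist actually distributes tests across worker processes, while this toy uses threads just to show the effect:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Illustrative only: why 8 independent "tests" on 4 workers take roughly
# a quarter of the sequential wall-clock time.
def fake_test(i):
    time.sleep(0.05)  # stand-in for real test work
    return f"test_{i} passed"

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fake_test, range(8)))
elapsed = time.perf_counter() - start

print(f"{len(results)} tests in {elapsed:.2f}s on 4 workers")
```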

    Switch from different selenium grid providers (SELENIUM_GRID_URL)

    For browser based testing, by default your tests will be launched on a remote grid URL. If you don't touch this option the default grid will be used (a local Zalenium or any other provider), but in case of need you can easily switch provider without having to change anything in your testware.

    If you want, you can save money by maintaining and using a local Zalenium as the default option; Zalenium can be configured as a Selenium grid router that dispatches the capabilities it is not able to satisfy to another provider. This way you can save money and raise the parallelism level a little without having to change plan.

    Repeat test execution for a given amount of times (COUNT)

    Already discussed above, and often used in conjunction with BLOCK_FIRST_FAILURE (pytest's core -x option).

    If you are trying to diagnose an intermittent failure, it can be useful to run the same test or group of tests over and over again until you get a failure. You can use py.test’s -x option in conjunction with pytest-repeat to force the test runner to stop at the first failure.

    Based on pytest-repeat's --count=COUNT command line option.

    Enable random test ordering execution (RANDOM_ENABLE)

    This option enables random test execution order.

    At the moment I'm using the pytest-randomly plugin, but there are 3 or 4 similar alternatives I still have to try out.

    By randomly ordering the tests, the risk of surprising inter-test dependencies is reduced.

    Specify a random seed (RANDOM_SEED)

    If you get a failure executing a random test, it should be possible to reproduce it systematically by rerunning the same test order with the same test data.

    Again from the pytest-randomly README:

    By resetting the random seed to a repeatable number for each test, tests can create data based on random numbers and yet remain repeatable, for example factory boy’s fuzzy values. This is good for ensuring that tests specify the data they need and that the tested system is not affected by any data that is filled in randomly due to not being specified.
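
The mechanism the quote describes boils down to seeding the random generator with a known value. A minimal sketch (the helper name and data format are made up for illustration):

```python
import random

# The idea behind RANDOM_SEED, sketched: reset the seed so "fuzzy"
# generated test data is reproducible when you replay a failed run.
def fuzzy_username(seed):
    rng = random.Random(seed)  # dedicated generator, seeded deterministically
    return "user_" + "".join(rng.choice("abcdefghij") for _ in range(8))

# Same seed → same data, so a random-order failure can be reproduced exactly.
assert fuzzy_username(1234) == fuzzy_username(1234)
print(fuzzy_username(1234))
```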

    Play option (PLAY)

    This option will be discussed in a dedicated blog post I am going to write.

    Basically you can paste a JSON serialization of actions and assertions, and the pytest runner will execute your test procedure.

    All you need is a computer with a browser to run any kind of test (API, integration, system, UI, etc.). You can paste the steps to reproduce a bug into a JIRA issue, and everyone will be able to paste them into the Jenkins build with parameters form.
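
To give a feel for how a JSON-driven procedure can be executed, here is a toy interpreter in the spirit of pytest-play. The command names and the "steps" layout below are illustrative assumptions, not pytest-play's real schema (see the pytest-play documentation for the actual format):

```python
import json

# Toy engine: each step names a command type and the engine dispatches it
# to a registered handler. NOT pytest-play's real schema or API.
def play(raw_json, commands):
    for step in json.loads(raw_json)["steps"]:
        commands[step["type"]](step)

log = []
commands = {
    "get": lambda step: log.append(f"GET {step['url']}"),
    "assert": lambda step: log.append(f"CHECK {step['expression']}"),
}

procedure = """
{"steps": [
  {"type": "get", "url": "https://example.com/login"},
  {"type": "assert", "expression": "status == 200"}
]}
"""
play(procedure, commands)
print(log)  # → ['GET https://example.com/login', 'CHECK status == 200']
```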

    See pytest-play for further information.

    If you are going to attend the next PyCon in Florence, don't miss the following pytest-play talk presented by Serena Martinetti:

    UPDATES:

      How to create a pytest project

      If you are a little bit curious about how to install pytest or create a pytest runner with Jenkins, have a look at the following scaffolding tool:

      It provides a hello world example that lets you start with the test technique most suitable for you: plain Selenium scripts, BDD or pytest-play JSON test procedures. If you want, you can install a page objects library. So you can create a QA project in minutes.

      Your QA project will be shipped with a Jenkinsfile that requires a tox-py36 docker executor providing a python3.6 environment with tox already installed; unfortunately tox-py36 is not yet public, so at the moment you should implement it on your own.
      Once you provide a tox-py36 docker executor, the Jenkinsfile will create the Jenkins build with parameters form automatically on the very first Jenkins build of your project.

      Conclusions

      I hope you'll find some useful information in this article: nice to have features for test frameworks or platforms, a little bit of curiosity about the Python world, or a new pytest plugin you never heard of.

      Feedback and contributions are always welcome.

      Tweets about test automation and new articles happen here:

      Planet Python

      Davide Moro: High quality automated docker hub push using Github, TravisCI and pyup for Python tool distributions

      Let's say you want to distribute a Python tool with Docker using known good dependency versions, ready to be used by end users… In this article you will see how to continuously keep a Docker Hub container up to date with minimal management effort (because I'm a lazy guy) using GitHub, TravisCI and pyup.

      The goal was to reduce as much as possible any manual activity for updates, check everything works fine before pushing, minimize build times, and keep the docker container secure and updated with high confidence in the final quality.

      As an example, let's see what happens under the hood behind every pytest-play Docker Hub update on the official container https://cloud.docker.com/u/davidemoro/repository/docker/davidemoro/pytest-play. By the way, if you are a pytest-play user: did you know that you can use Docker for running pytest-play and that there is a docker container ready to be used on Docker Hub? See a complete and working example here: https://davidemoro.blogspot.com/2019/02/api-rest-testing-pytest-play-yaml-chuck-norris.html

      Repositories

      The docker build/publish stuff lives in another repository: https://github.com/davidemoro/pytest-play-docker implements the Docker releasing workflow for https://github.com/pytest-dev/pytest-play on Docker Hub (https://hub.docker.com/r/davidemoro/pytest-play).

      Workflow

      This is the highly automated workflow, at the time of writing, for publishing pytest-play on Docker Hub:

      All test executions run against the docker build, so there is a guarantee that what is pushed to Docker Hub works fine (it doesn't only check that the build was successful: it runs integration tests against the docker build). So no version incompatibilities, no integration issues between all the integrated third party pytest-play plugins, and no issues due to the operating system integration (e.g., I recently experienced an issue on Alpine Linux with a pip install psycopg2-binary that apparently worked fine, but if you try to import psycopg2 inside your code you get an unexpected import error due to a recent issue reported here: https://github.com/psycopg/psycopg2/issues/684).

      So now every time you run a command like the following one (see a complete and working example here: https://davidemoro.blogspot.com/2019/02/api-rest-testing-pytest-play-yaml-chuck-norris.html):

      docker run --rm -v $(pwd):/src davidemoro/pytest-play

      you know exactly what workflow lies behind every automated docker push for pytest-play.

      Acknowledgements

      Many thanks to Andrea Ratto for the 10-minute Travis build speedup thanks to the Docker cache: from ~11 minutes to ~1 minute is a huge improvement indeed! It was made possible by the docker pull davidemoro/pytest-play command, building with the --cache-from davidemoro/pytest-play option, and moving the longest steps to a separate, cacheable stage (e.g., the very long cassandra-driver compilation moved to requirements_cassandra.txt will be executed only if necessary).

      Relevant technical details about pytest-play-docker follow (some minor optimizations are still possible to save on the final image size).

      pytest-play-docker/.travis.yml

      sudo: required
      services:
      - docker
      - …

      env:
        global:
        - IMAGE_NAME=davidemoro/pytest-play
        - secure: …
      before_script:
      - …

      script:
      - travis_wait docker pull python:3.7
      - travis_wait docker pull "$IMAGE_NAME:latest"
      - travis_wait 25 docker build --cache-from "$IMAGE_NAME:latest" -t "$IMAGE_NAME" .
      - docker run -i --rm -v $(pwd)/tests:/src --network host -v /var/run/mysqld/mysqld.sock:/var/run/mysqld/mysqld.sock $IMAGE_NAME --splinter-webdriver=remote
        --splinter-remote-url=$REMOTE_URL
      deploy:
        provider: script
        script: bash docker_push
        on:
          branch: master

      pytest-play-docker/docker_push

      #!/bin/bash
      echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
      docker tag "$IMAGE_NAME" "$IMAGE_NAME:$TRAVIS_COMMIT"
      docker tag "$IMAGE_NAME" "$IMAGE_NAME:latest"
      docker push "$IMAGE_NAME:$TRAVIS_COMMIT"
      docker push "$IMAGE_NAME:latest"

      Feedback

      Any feedback will be always appreciated.

      Do you like the Docker Hub push process for pytest-play? Let me know by becoming a pytest-play stargazer!

      Davide Moro: API/REST testing like Chuck Norris with pytest play using YAML

      In this article we will see how to write HTTP API tests with pytest using YAML files thanks to pytest-play >= 2.0.0 (pytest-play provides support for Selenium, MQTT, SQL and more. See third party pytest-play plugins).

      The guest star is Chuck Norris, thanks to the public JSON endpoint available at https://api.chucknorris.io/, so you will be able to run this test on your own following this example.

      Obviously this is a joke: Chuck Norris cannot fail, so tests are not needed.

      Prerequisites and installation

      Installation is not needed; the only prerequisite is Docker, thanks to https://hub.docker.com/r/davidemoro/pytest-play.

      At the link above you'll find the instructions needed for installing Docker on any platform.

      If you want to run this example without Docker, install pytest-play with the external plugin play_requests, based on the fantastic requests library (play_requests is already included in the docker container).

      Project structure

      You need:

      • a folder (e.g., chuck-norris-api-test)
      • one or more test_XXX.yml files containing your steps (the test_ prefix and the .yml extension matter)

      For example:

      As you can see, each scenario will be repeated for each item you provide in the test_data structure.
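
The test_data idea can be sketched in a few lines of plain Python: one scenario template, executed once per data item. The function names and step format below are illustrative, not pytest-play's real API:

```python
# Sketch of data-driven repetition: run one scenario once per test_data item.
def run_scenario(scenario, test_data):
    return [scenario(variables) for variables in test_data]

def search_category(variables):
    # A real step would perform the HTTP call; here we just format the request.
    return f"GET /jokes/search?query={variables['category']}"

print(run_scenario(search_category, [{"category": "dev"}, {"category": "movie"}]))
# → ['GET /jokes/search?query=dev', 'GET /jokes/search?query=movie']
```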

      The first example asserts that the categories list contains some values, checked against this endpoint: https://api.chucknorris.io/jokes/categories; the second example shows how to search by category (although probably Chuck Norris will find you first, according to this Chuck Norris fact: "You don't find Chuck Norris, Chuck Norris finds you!")

      Alternatively you can check out this folder:

      Usage

      Visit the project folder and run the following command:

      docker run --rm -v $(pwd):/src davidemoro/pytest-play

      You can append extra standard pytest options like -x, --pdb and so on. See https://docs.pytest.org/en/latest/

      Homeworks

      It’s time to show off with a GET roundhouse kick! Ping me on twitter @davidemoro sharing your pytest-play implementation against the random Chuck Norris fact generator by category!

      GET https://api.chucknorris.io/jokes/random?category=dev

      {
          "category": ["dev"],
          "icon_url": "https://assets.chucknorris.host/img/avatar/chuck-norris.png",
          "id": "yrvjrpx3t4qxqmowpyvxbq",
          "url": "https://api.chucknorris.io/jokes/yrvjrpx3t4qxqmowpyvxbq",
          "value": "Chuck Norris protocol design method has no status, requests or responses, only commands."
      }

      Do you like pytest-play?

      Let's get in touch for any suggestion, contribution or comment. Contributions will be much appreciated too!