Configuring execution

This section explains different command line options that can be used for configuring the `test execution`_ or `post-processing outputs`_. Options related to generated output files are discussed in the `next section`__.

Selecting test cases

Robot Framework offers several command line options for selecting which test cases to execute. The same options also work when post-processing outputs with Rebot_.

By test suite and test case names

Test suites and test cases can be selected by their names with the command line options --suite (-s) and --test (-t), respectively. Both of these options can be used several times to select several test suites or cases. Arguments to these options are case- and space-insensitive, and they can also be `simple patterns`_ matching multiple names. If both the --suite and --test options are used, only test cases with matching names in matching suites are selected.

--test Example
--test mytest --test yourtest
--test example*
--test mysuite.mytest
--test *.suite.mytest
--suite example-??
--suite mysuite --test mytest --test your*

Using the --suite option is more or less the same as executing only the appropriate test case file or directory. One major benefit is the possibility to select a suite based on its parent suite. The syntax is to give both the parent and child suite names separated with a dot. In this case, the possible setup and teardown of the parent suite are executed.

--suite parent.child
--suite myhouse.myhousemusic --test jack*

Selecting individual test cases with the --test option is very practical when creating test cases, but quite limited when running tests automatically. The --suite option can be useful in that case, but in general, selecting test cases by tag names is more flexible.

By tag names

It is possible to include and exclude test cases by tag_ names with the --include (-i) and --exclude (-e) options, respectively. If the --include option is used, only test cases having a matching tag are selected, and with the --exclude option test cases having a matching tag are not. If both are used, only tests with a tag matching the former option, and not with a tag matching the latter, are selected.

--include example
--exclude not_ready
--include regression --exclude long_lasting

Both --include and --exclude can be used several times to match multiple tags. In that case a test is selected if it has a tag that matches any included tags, and also has no tag that matches any excluded tags.
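The combined include/exclude rule can be sketched in Python. This is a hypothetical illustration only: real tag matching is also space- and underscore-insensitive and supports AND, OR and NOT combinations, all of which are omitted here for brevity.

```python
from fnmatch import fnmatchcase

def is_selected(test_tags, includes, excludes):
    """Sketch of the selection rule: a test is selected if it matches
    any included pattern (when --include is used) and does not match
    any excluded pattern."""
    def matches(patterns):
        return any(fnmatchcase(tag.lower(), pattern.lower())
                   for tag in test_tags for pattern in patterns)
    if includes and not matches(includes):
        return False   # --include used, but no tag matched
    if excludes and matches(excludes):
        return False   # a tag matched an excluded pattern
    return True
```

For example, with `--include regression --exclude long_lasting`, a test tagged only with regression is selected, while a test tagged with both regression and long_lasting is not.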

In addition to specifying a tag to match fully, it is possible to use `tag patterns`_ where * and ? are wildcards and AND, OR, and NOT operators can be used for combining individual tags or patterns together:

--include feature-4?
--exclude bug*
--include fooANDbar
--exclude xxORyyORzz
--include fooNOTbar

Selecting test cases by tags is a very flexible mechanism and allows many interesting possibilities:

  • A subset of tests to be executed before other tests, often called smoke tests, can be tagged with smoke and executed with --include smoke.
  • Unfinished tests can be committed to version control with a tag such as not_ready and excluded from the test execution with --exclude not_ready.
  • Tests can be tagged with sprint-<num>, where <num> specifies the number of the current sprint, and after executing all test cases, a separate report containing only the tests for a certain sprint can be generated (for example, rebot --include sprint-42 output.xml).

Re-executing failed test cases

Command line option --rerunfailed (-R) can be used to select all failed tests from an earlier `output file`_ for re-execution. This option is useful, for example, if running all tests takes a lot of time and one wants to iteratively fix failing test cases.

robot tests                             # first execute all tests
robot --rerunfailed output.xml tests    # then re-execute failing

Behind the scenes this option selects the failed tests as they would have been selected individually with the --test option. It is possible to further fine-tune the list of selected tests by using --test, --suite, --include and --exclude options.
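To illustrate what selecting the failed tests means in practice, the sketch below picks the names of failed tests from an output.xml-style document. This is a hypothetical helper, not Robot Framework's actual implementation, and the element and attribute names are assumptions based on the output file format:

```python
import xml.etree.ElementTree as ET

def failed_test_names(output_xml_text):
    # Collect names of tests whose <status> element reports FAIL.
    root = ET.fromstring(output_xml_text)
    failed = []
    for test in root.iter('test'):
        status = test.find('status')
        if status is not None and status.get('status') == 'FAIL':
            failed.append(test.get('name'))
    return failed

# Minimal hand-written sample in the spirit of an output file.
sample = '''
<robot>
  <suite name="Example">
    <test name="Passing"><status status="PASS"/></test>
    <test name="Failing"><status status="FAIL"/></test>
  </suite>
</robot>
'''
```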

Using an output not originating from executing the same tests that are run now causes undefined results. Additionally, it is an error if the output contains no failed tests. Using the special value NONE as the output is the same as not specifying this option at all.


Re-execution results and original results can be `merged together`__ using the --merge command line option.


Re-executing failed tests is a new feature in Robot Framework 2.8. Prior to Robot Framework 2.8.4 the option was named --runfailed. The old name still works, but it will be removed in the future.

When no tests match selection

By default, when no tests match the selection criteria, test execution fails with an error like:

[ ERROR ] Suite 'Example' with includes 'xxx' contains no test cases.

Because no outputs are generated, this behavior can be problematic if tests are executed and results are processed automatically. Fortunately, the command line option --RunEmptySuite can be used to force the suite to be executed also in this case. As a result, normal outputs are created, but they show zero executed tests. The same option can also be used to alter the behavior when an empty directory or a test case file containing no tests is executed.

A similar situation can occur when processing output files with Rebot_. It is possible that no tests match the used filtering criteria or that the output file contains no tests to begin with. By default, executing Rebot fails in these cases, but it has a separate --ProcessEmptySuite option that can be used to alter the behavior. In practice this option works the same way as --RunEmptySuite does when running tests.


--ProcessEmptySuite option was added in Robot Framework 2.7.2.

Setting criticality

The final result of test execution is determined based on critical tests. If a single critical test fails, the whole test run is considered failed. On the other hand, non-critical test cases can fail and the overall status is still considered passed.

All test cases are considered critical by default, but this can be changed with the --critical (-c) and --noncritical (-n) options. These options specify which tests are critical based on tags_, similarly as --include and --exclude are used to select tests by tags. If only --critical is used, test cases with a matching tag are critical. If only --noncritical is used, tests without a matching tag are critical. Finally, if both are used, only tests with a critical tag but without a non-critical tag are critical.

Both --critical and --noncritical also support the same `tag patterns`_ as --include and --exclude. This means that pattern matching is case-, space- and underscore-insensitive, * and ? are supported as wildcards, and AND, OR and NOT operators can be used to create combined patterns.

--critical regression
--noncritical not_ready
--critical iter-* --critical req-* --noncritical req-6??
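The criticality rule described above can be sketched as follows. As with the earlier selection sketch, this is a hypothetical illustration handling only simple * and ? patterns:

```python
from fnmatch import fnmatchcase

def is_critical(test_tags, critical, noncritical):
    # A test with a tag matching a --noncritical pattern is never critical.
    # Otherwise, if --critical patterns are given, the test must match one
    # of them; with no options at all, every test is critical.
    def matches(patterns):
        return any(fnmatchcase(tag.lower(), pattern.lower())
                   for tag in test_tags for pattern in patterns)
    if noncritical and matches(noncritical):
        return False
    if critical:
        return matches(critical)
    return True
```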

The most common use case for setting criticality is having test cases that are not ready or test features still under development in the test execution. These tests could also be excluded from the test execution altogether with the --exclude option, but including them as non-critical tests enables you to see when they start to pass.

The criticality set when tests are executed is not stored anywhere. If you want to keep the same criticality when `post-processing outputs`_ with Rebot, you need to use --critical and/or --noncritical also with it:

# Use rebot to create new log and report from the output created during execution
robot --critical regression --outputdir all tests.robot
rebot --name Smoke --include smoke --critical regression --outputdir smoke all/output.xml

# No need to use --critical/--noncritical when no log or report is created
robot --log NONE --report NONE tests.robot
rebot --critical feature1 output.xml

Setting metadata

Setting the name

When Robot Framework parses test data, `test suite names are created from file and directory names`__. The name of the top-level test suite can, however, be overridden with the command line option --name (-N). Underscores in the given name are converted to spaces automatically, and words in the name are capitalized.
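The documented transformation can be sketched like this. This is a hypothetical helper illustrating the description above; Robot Framework's exact rules may differ in corner cases:

```python
def format_name(given_name):
    # Underscores become spaces and each resulting word is capitalized.
    return ' '.join(word.capitalize() for word in given_name.split('_'))
```

For example, `--name my_test_suite` would result in the top-level suite being named "My Test Suite".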

Setting the documentation

In addition to `defining documentation in the test data`__, documentation of the top-level suite can be given from the command line with the option --doc (-D). Underscores in the given documentation are converted to spaces, and it may contain simple `HTML formatting`_.

Setting free metadata

`Free test suite metadata`_ may also be given from the command line with the option --metadata (-M). The argument must be in the format name:value, where name is the name of the metadata to set and value is its value. Underscores in the name and value are converted to spaces, and the latter may contain simple `HTML formatting`_. This option may be used several times to set multiple metadata values.

Setting tags

The command line option --settag (-G) can be used to set the given tag to all executed test cases. This option may be used several times to set multiple tags.

Configuring where to search libraries and other extensions

When Robot Framework imports a `test library`__, listener, or some other Python based extension, it uses the Python interpreter to import the module containing the extension from the system. The list of locations where modules are looked for is called the module search path, and its contents can be configured using different approaches explained in this section. When importing Java based libraries or other extensions on Jython, Java classpath is used in addition to the normal module search path.

Robot Framework uses Python's module search path also when importing `resource and variable files`_ if the specified path does not match any file directly.

Successful test execution requires the module search path to be set correctly so that libraries and other extensions are found. If you need to customize it using the approaches explained below, it is often a good idea to create a custom `start-up script`_.

Locations automatically in module search path

Python interpreters automatically have their own standard library, as well as the directory where third party modules are installed, in the module search path. This means that test libraries `packaged using Python's own packaging system`__ are installed so that they can be imported without any additional configuration.


Python, Jython and IronPython read additional locations to be added to the module search path from the PYTHONPATH, JYTHONPATH and IRONPYTHONPATH environment variables, respectively. If you want to specify more than one location in any of them, you need to separate the locations with a colon on UNIX-like machines (e.g. /opt/libs:$HOME/testlibs) and with a semicolon on Windows (e.g. D:\libs;%HOMEPATH%\testlibs).
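The effect of PYTHONPATH can be demonstrated with a small script that starts a child interpreter and checks that the given location shows up in its module search path. The /opt/testlibs directory is just a placeholder:

```python
import os
import subprocess
import sys

# Start a child interpreter with PYTHONPATH set and ask it whether the
# placeholder location ended up in sys.path.
env = dict(os.environ, PYTHONPATH='/opt/testlibs')
result = subprocess.check_output(
    [sys.executable, '-c', 'import sys; print("/opt/testlibs" in sys.path)'],
    env=env, text=True)
```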

Environment variables can be configured permanently system wide or so that they affect only a certain user. Alternatively they can be set temporarily before running a command, something that works extremely well in custom `start-up scripts`_.


Prior to Robot Framework 2.9, the contents of the PYTHONPATH environment variable were added to the module search path by the framework itself when running on Jython and IronPython. Nowadays that is not done anymore, and JYTHONPATH and IRONPYTHONPATH must be used with these interpreters.

Using --pythonpath option

Robot Framework has a separate command line option --pythonpath (-P) for adding locations to the module search path. Although the option name has the word Python in it, it works also on Jython and IronPython.

Multiple locations can be given by separating them with a colon, regardless of the operating system, or by using this option several times. The given path can also be a glob pattern matching multiple paths, but then it typically needs to be escaped__.


--pythonpath libs
--pythonpath /opt/
--pythonpath mylib.jar --pythonpath lib/STAR.jar --escape star:STAR

Configuring sys.path programmatically

Python interpreters store the module search path they use as a list of strings in the sys.path attribute. This list can be updated dynamically during execution, and changes are taken into account the next time something is imported.
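For example, a test library or listener could extend the search path at runtime like this (the directory name is a placeholder):

```python
import sys

# Append an assumed directory to the module search path; modules located
# there become importable from this point on.
sys.path.append('/opt/testlibs')
```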

Java classpath

When libraries implemented in Java are imported with Jython, they can be either in Jython's normal module search path or in the Java classpath. The most common way to alter the classpath is to set the CLASSPATH environment variable, similarly as with PYTHONPATH, JYTHONPATH or IRONPYTHONPATH. Alternatively, it is possible to use Java's -cp command line option. This option is not exposed to the robot `runner script`_, but it is possible to use it with Jython by adding the -J prefix like jython -J-cp example.jar -m robot tests.robot.

When using the standalone JAR distribution, the classpath has to be set a bit differently, because the java -jar command supports neither the CLASSPATH environment variable nor the -cp option. There are two different ways to configure the classpath:

java -cp lib/testlibrary.jar:lib/app.jar:robotframework-2.9.jar org.robotframework.RobotFramework tests.robot
java -Xbootclasspath/a:lib/testlibrary.jar:lib/app.jar -jar robotframework-2.9.jar tests.robot

Setting variables

Variables_ can be set from the command line either individually__ using the --variable (-v) option or through `variable files`_ with the --variablefile (-V) option. Variables and variable files are explained in separate chapters, but the following examples illustrate how to use these options:

--variable name:value
--variable OS:Linux --variable IP:
--variablefile path/to/
--variable ENVIRONMENT:Windows --variablefile c:\resources\

Dry run

Robot Framework supports a so-called dry run mode, where the tests are otherwise run normally, but the keywords coming from the test libraries are not executed at all. The dry run mode can be used to validate the test data; if the dry run passes, the data should be syntactically correct. This mode is triggered using the option --dryrun.

The dry run execution may fail for the following reasons:

  • Using keywords that are not found.
  • Using keywords with the wrong number of arguments.
  • Using user keywords that have invalid syntax.

In addition to these failures, normal `execution errors`__ are shown, for example, when test library or resource file imports cannot be resolved.


The dry run mode does not validate variables. This limitation may be lifted in future releases.

Randomizing execution order

The test execution order can be randomized using the option --randomize <what>[:<seed>], where <what> is one of the following:

  • tests: Test cases inside each test suite are executed in random order.
  • suites: All test suites are executed in a random order, but test cases inside suites are run in the order they are defined.
  • all: Both test cases and test suites are executed in a random order.
  • none: Neither the execution order of tests nor suites is randomized. This value can be used to override an earlier value set with --randomize.

Starting from Robot Framework 2.8.5, it is possible to give a custom seed to initialize the random generator. This is useful if you want to re-run tests using the same order as earlier. The seed is given as part of the value for --randomize in the format <what>:<seed>, and it must be an integer. If no seed is given, it is generated randomly. The executed top-level test suite automatically gets metadata__ named Randomized that tells both what was randomized and what seed was used.


robot --randomize tests my_test.robot
robot --randomize all:12345 path/to/tests
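The reason a seed makes the order repeatable can be illustrated with Python's own random module. This is a sketch of the general idea, not Robot Framework's exact shuffling code:

```python
import random

# The same seed always produces the same shuffled order, which is why
# re-running with the seed reported in the Randomized metadata repeats
# the earlier execution order.
tests = ['test 1', 'test 2', 'test 3', 'test 4']
first = list(tests)
random.Random(12345).shuffle(first)
second = list(tests)
random.Random(12345).shuffle(second)
```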

Programmatic modification of test data

If the provided built-in features to modify test data before execution are not enough, Robot Framework 2.9 and newer make it possible to do custom modifications programmatically. This is accomplished by creating a model modifier and activating it using the --prerunmodifier option.

Model modifiers should be implemented as visitors that can traverse the executable test suite structure and modify it as needed. The visitor interface is explained as part of the `Robot Framework API documentation <visitor interface_>`_, and it is possible to modify executed `test suites <running.TestSuite_>`_, `test cases <running.TestCase_>`_ and `keywords <running.Keyword_>`_ using it. The example below ought to give an idea of how model modifiers can be used and how powerful this functionality is.


When a model modifier is taken into use on the command line using the --prerunmodifier option, it can be specified either as the name of the modifier class or a path to the modifier file. If the modifier is given as a class name, the module containing the class must be in the module search path, and if the module name is different than the class name, the given name must include both, like module.ModifierClass. If the modifier is given as a path, the class name must be the same as the file name. For the most part, this works exactly like when `specifying a test library to import`__.

If a modifier requires arguments, like the example above does, they can be specified after the modifier name or path using either a colon (:) or a semicolon (;) as a separator. If both are used in the value, the one used first is considered the actual separator.
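The separator rule can be sketched as follows. This is a hypothetical illustration of the documented behavior, not Robot Framework's actual parsing code:

```python
def split_modifier_args(value):
    # Whichever of ':' and ';' appears first in the value acts as the
    # separator between the modifier name/path and its arguments.
    colon, semi = value.find(':'), value.find(';')
    if colon == -1 and semi == -1:
        return [value]
    if semi == -1 or (colon != -1 and colon < semi):
        return value.split(':')
    return value.split(';')
```

Using a semicolon is handy when an argument itself contains colons, for example a Windows path.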

For example, if the above model modifier would be in a file, it could be used like this:

# Specify the modifier as a path. Run every second test.
robot --prerunmodifier path/to/ tests.robot

# Specify the modifier as a name. Run every third test, starting from the second.
# must be in the module search path.
robot --prerunmodifier SelectEveryXthTest:3:1 tests.robot

If more than one model modifier is needed, they can be specified by using the --prerunmodifier option multiple times. If similar modifying is needed before creating results, `programmatic modification of results`_ can be enabled using the --prerebotmodifier option.

Controlling console output

There are various command line options to control how test execution is reported on the console.

Console output type

The overall console output type is set with the --console option. It supports the following case-insensitive values:

  • verbose: Every test suite and test case is reported individually. This is the default.
  • dotted: Only show . for passed tests, f for failed non-critical tests, F for failed critical tests, and x for tests that are skipped because of `test execution exit`__. Failed critical tests are listed separately after execution. This output type makes it easy to see whether there are any failures during execution, even if there are a lot of tests.
  • quiet: No output except for `errors and warnings`_.
  • none: No output whatsoever. Useful when creating a custom output using, for example, listeners_.

Separate convenience options --dotted (-.) and --quiet are shortcuts for --console dotted and --console quiet, respectively.


robot --console quiet tests.robot
robot --dotted tests.robot


--console, --dotted and --quiet are new options in Robot Framework 2.9. Prior to that the output was always the same as in the current verbose mode.

Console width

The width of the test execution output in the console can be set using the option --consolewidth (-W). The default width is 78 characters.


On many UNIX-like machines you can use the handy $COLUMNS environment variable like --consolewidth $COLUMNS.


Prior to Robot Framework 2.9 this functionality was enabled with --monitorwidth option that was first deprecated and is nowadays removed. The short option -W works the same way in all versions.

Console colors

The --consolecolors (-C) option is used to control whether colors should be used in the console output. Colors are implemented using ANSI colors except on Windows where, by default, Windows APIs are used instead. Accessing these APIs from Jython is not possible, and as a result colors do not work with Jython on Windows.

This option supports the following case-insensitive values:

  • auto: Colors are enabled when outputs are written into the console, but not when they are redirected into a file or elsewhere. This is the default.
  • on: Colors are used also when outputs are redirected. Does not work on Windows.
  • ansi: Same as on, but uses ANSI colors also on Windows. Useful, for example, when redirecting output to a program that understands ANSI colors. New in Robot Framework 2.7.5.
  • off: Colors are disabled.


Prior to Robot Framework 2.9 this functionality was enabled with --monitorcolors option that was first deprecated and is nowadays removed. The short option -C works the same way in all versions.

Console markers

Starting from Robot Framework 2.7, special markers . (success) and F (failure) are shown on the console when using the verbose output and top-level keywords in test cases end. The markers make it possible to follow the test execution at a high level, and they are erased when the test cases themselves end.

Starting from Robot Framework 2.7.4, it is possible to configure when markers are used with --consolemarkers (-K) option. It supports the following case-insensitive values:

  • auto: Markers are enabled when the standard output is written into the console, but not when it is redirected into a file or elsewhere. This is the default.
  • on: Markers are always used.
  • off: Markers are disabled.


Prior to Robot Framework 2.9 this functionality was enabled with --monitormarkers option that was first deprecated and is nowadays removed. The short option -K works the same way in all versions.

Setting listeners

Listeners_ can be used to monitor the test execution. When they are taken into use from the command line, they are specified using the --listener command line option. The value can either be a path to a listener or a listener name. See the `Listener interface`_ section for more details about importing listeners and using them in general.