Getting started

QVision test cases and automation tasks can be developed with the Pace Console and then moved to Pace for production. The test cases and automation tasks themselves are usually executed using the Robot Framework. Familiarity with the Robot Framework is beneficial.

From now on, we use the phrase “automation scripts” to refer to both test cases and automation tasks. We also assume that the user has installed Pace Console with Computer Vision enabled.

Running your first automation script

When you have Pace Console installed, you should see a Qentinel Pace Console icon on the desktop. Open Pace Console by double-clicking the icon. This opens a console that has everything configured for running automation scripts.

The next step is to create and run your first automation script. Below you can find an example script that opens WordPad and performs a couple of actions in it (can you guess which ones by looking at the script?). Let’s try it out.

*** Settings ***
Library    QVision

*** Test Cases ***
WordPad Test Case
    OpenApplication    WordPad
    ActivateWindow     WordPad

    WriteText          Hello world!
    ClickText          Find
    VerifyText         Find what

Copy the above script to your favourite editor (or Notepad) and save it to the folder C:\automation (the default Pace Console folder) with the name example.robot. Now open Pace Console and type:

robot example.robot

After pressing the “Enter” key you should see your automation script opening WordPad, activating the WordPad window, and then writing something to it. Finally it clicks the “Find” button and verifies that it can find the “Find what” text.

Currently QVision is partly dependent on the resolution and scale settings of Windows. If you encounter problems with the example above, please set your Windows resolution to 1920 x 1080 with either 100% or 125% scale (see Troubleshooting).

Developing automation scripts

In the previous section we ran our first automation script. In this section we develop that script further in a way that is likely to be useful in a real automation development task.

A good way to develop automation scripts is to build them step by step. We first make a small increment or step, observe that it does what we intended, and then add it to the automation script. This helps build long-lasting automation scripts that would be difficult to develop in one go. Let’s try this in practice.

Open the example.robot file that you created in the previous section. We wish to complete the activity of finding text in the document. First ensure that WordPad is still open in the background as it was left after the previous run (i.e. containing the text “Hello world!” and with the “Find” dialog visible). If not, just execute the script again or bring WordPad to that state manually.

Now, in the editor, add a new test case called “Step” after “WordPad Test Case” with the following contents:

*** Test Cases ***
WordPad Test Case
    # as before...

# A temporary test case for development
Step
    # Start by activating the "Find" dialog
    ActivateWindow    Find

    # Next steps for finding text "Hello"
    WriteText         Hello
    ClickText         Find Next
    ClickText         Cancel

Here, in the “Step” test case, we aim to find the text “Hello” with the help of the “Find” dialog. With WordPad open in the correct state, execute just the “Step” test case with:

robot -t Step example.robot

Running this should now:

  1. Write “Hello” in the “Find” dialog

  2. Click the “Find Next” button

  3. Click the “Cancel” button to close the dialog

If the above succeeded, you should be left in a state where the word “Hello” is highlighted. If this is the case, add the steps to the actual test case:

*** Settings ***
Library    QVision

*** Test Cases ***
WordPad Test Case
    OpenApplication    WordPad
    ActivateWindow     WordPad

    WriteText          Hello world!
    ClickText          Find
    VerifyText         Find what
    WriteText          Hello
    ClickText          Find Next
    ClickText          Cancel

To check that the complete test case works, you can run it by its name:

robot -t "WordPad Test Case" example.robot

If you removed the “Step” test case from the robot file, you can simply write:

robot example.robot

as before.

Reading values

The previous script leaves us in a state where the word “Hello” is highlighted in WordPad. Our next job is to verify that it actually says “Hello”. Here we already know that it does, but in many cases you will have to read fields or texts whose content you don’t know beforehand (e.g. the id of an item).

For such cases we want to use methods that get the value directly from the UI rather than through OCR. This is because OCR can make mistakes when reading text, and those mistakes can be fatal if we were, for instance, to delete or modify a record whose id was read incorrectly.

For extracting text directly we can use the GetText keyword, which uses a copy action internally. Again, leave WordPad open in the state where the word “Hello” is highlighted. Let’s add a new step:

*** Test Cases ***
WordPad Test Case
    # as before...

Step
    # Start by activating the "WordPad" window
    ActivateWindow    WordPad

    # Read the highlighted value
    ${text}=          GetText
    # Verify the value using the Robot Framework syntax
    ShouldBeEqualAsStrings    ${text}    Hello

Run the step as before:

robot -t Step example.robot

If the verification succeeds, you can add these steps to your actual test case as before.

Using offsets

Often you need to click elements that don’t contain any visible text, such as icons or input fields. The traditional way to click these elements is to use icons. Sometimes, however, such an element is described by a nearby text, which can be used to click the element with the help of offsets.

Offsets allow interacting with elements near a text:

ClickText    View    below=2.5

The above would click two and a half character heights below the text “View”. The character height is measured from the text “View” as it appears on the screen. Try it out to see what the above line does in WordPad.

Other offset parameters are above, left, and right. The left and right offsets use character widths for the offset distance.
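For example, the lines below are a sketch of how the other offsets could be used; the anchor texts and distances are placeholders and may need adjusting for your screen layout:

ClickText    Find what    right=5
ClickText    View         above=1

The first line would click five character widths to the right of the text “Find what” (for instance into its input field), and the second one character height above the text “View”.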

Final task

To wrap things up, try to close WordPad at the end of the test using the “File” menu. After you have succeeded, you can compare your script to the one below:

*** Settings ***
Library    QVision

*** Test Cases ***
WordPad Test Case
    # Setup WordPad
    OpenApplication    WordPad
    ActivateWindow     WordPad

    # Test case steps
    WriteText          Hello world!
    ClickText          Find
    VerifyText         Find what
    WriteText          Hello
    ClickText          Find Next
    ClickText          Cancel
    ${text}=           GetText
    ShouldBeEqualAsStrings    ${text}    Hello

    # Close WordPad
    ClickText          File
    ClickText          Exit
    ClickText          Don't Save

Now feel free to play around with WordPad or some other application of your choice. It is also a good idea to take a look at the example test cases in the C:\automation folder.

Debugging

During development you will encounter situations where it is helpful to see what QVision has actually seen. When the script encounters an error, QVision adds the latest screenshot, with annotations, to the log.html file (produced by running robot). The annotated image looks like this in the log.html file:

[Annotated image]

You can also manually add this image to the log file by adding a line:

LogScreenshot    annotated

to your automation script. Go ahead and add the above line to your test case and open the log.html file to find the annotated screenshot.

Debug flag

It may also be beneficial to enable the Robot Framework debug flag. This prints additional information to the log.html file to help identify the problem. The debug flag can be added with:

robot -L DEBUG example.robot

Common PaceWords in QVision

In the previous sections, you encountered PaceWords such as WriteText, ClickText and VerifyText. Below you can find common PaceWords with short descriptions:

*** Test Cases ***

Verification PaceWords
    # Find text on the screen
    VerifyText       Some text            [nearby "anchor" text]
    # Find image on the screen. Leave out the suffix .png
    VerifyIcon       image_name

    # Same as above but returning a boolean without failing the test case
    IsText           Some text
    IsIcon           image_name

Input PaceWords
    ClickText        Text to be clicked   [nearby "anchor" text]
    ClickIcon        image_name

    # Type text into the nearest text field (under development)
    TypeText         field_label          text_to_insert        [anchor for label]

    # TypeText alternative that clicks 5 character widths to the right of the label
    ClickText        field_label          [anchor for label]    right=5
    WriteText        text_to_insert

Debugging
    # Adds an annotated screenshot to the log.html file.
    # Useful for understanding what QVision sees at a specific point.
    LogScreenshot    annotated

Window Handling
    # These keywords currently work only on Windows
    ActivateWindow          header of window
    MaximizeWindow          [header of window]
    MinimizeWindow          [header of window]
    NormalizeWindow         [header of window]
    CloseWindow             [header of window]

Settings
    # Adjust how long keywords are retried before allowing them to fail
    SetConfig               default_timeout          10s

    # The path to the folder where you have the image files to search for
    SetReferenceFolder      path_to_directory

    # Uses only the active window for taking the screenshot
    EnableFocusedWindows
    # Uses the whole monitor for taking the screenshot
    DisableFocusedWindows
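
As a sketch of how the settings could be used in practice, the test case below adjusts the timeout and points QVision to a reference image folder. The timeout value and the folder path are only example placeholders:

*** Test Cases ***
Configure QVision
    # Retry keywords for up to 30 seconds before failing (example value)
    SetConfig               default_timeout     30s
    # Placeholder path; point this to your own reference image folder
    # (backslashes must be escaped in Robot Framework data)
    SetReferenceFolder      C:\\automation\\images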

Running in production

It is good practice to use the same screen resolution and scaling factor in the development and production environments. Screen size often affects the screen layout. For instance, some elements that are visible at a larger resolution may only be visible after scrolling at a smaller resolution.

Detections are also affected by the resolution settings, which may cause failed detections in the production environment.

Therefore always use the same settings in the development environment as in the production environment.