Issue #21 July 2006

Dogtail's Python Modules (and how to use them)

by Len DiMaggio



This is the second in a series of articles on the Dogtail automated GUI test framework. In last month's article, we introduced Dogtail, described its capabilities, and walked through the steps required to install and configure the framework, and the process of building automated test scripts. In this article, we'll take a closer look at the Python modules that comprise Dogtail.

What is Dogtail? In last month's article, we answered that question by examining its use as a GUI automation framework. This month, we'll answer that question again, but in a slightly different way. I once worked with a software development manager who had a great way of reducing complex software design issues into simple questions. He would call a halt to the discussion and ask, "OK, where are the bits?"

In this month's article, we'll discuss the bits that comprise Dogtail. The goals of this article are to give you a better understanding of Dogtail's design, and of the features that each of its modules support so that you can make better use of those features, and even modify or extend them to fulfill your GUI test automation needs.

OK, here's that question again. What is Dogtail? It's a set of Python modules. Now, let's look under the hood.

Under the hood

For the purposes of this discussion, we'll divide Dogtail into these two parts:

  • Part 1: The Core Framework - These modules contain the procedural API and the object oriented API, the test case classes, and generalized utilities.
  • Part 2: The Helper Libraries - These modules make the testing and debugging of specific applications easier.

Let's look at the Core Framework first.

Dogtail Core Framework Modules

Global Configuration Settings - the config.py module

The first module that we'll examine is config.py. This module defines the configuration parameters that control the execution of Dogtail scripts. These parameters are global across your entire Dogtail installation. Whatever you define in config.py affects the execution of all your Dogtail scripts.

Depending on the nature of the application for which you are developing Dogtail test scripts, you may want to customize some or all of these parameters. You set the values with statements such as:

>>> from dogtail.config import config
>>> config.logDir = '/tmp'
  • scratchDir, dataDir, logDir - Directories used by Dogtail. The default values are '/tmp/dogtail/', '/tmp/dogtail/data/', and '/tmp/dogtail/logs/', respectively. As you're reading this, make a mental note about 'logDir', as we'll revisit Dogtail logging when we discuss the 'tc.py' (test case comparison) module later in this article. The dataDir isn't used directly by Dogtail; it's available as a variable to your Dogtail scripts as a location for your own data files.
  • ensureSensitivity - This (boolean) parameter controls whether Dogtail checks that GUI nodes (buttons, combo boxes, etc.) are sensitive--in other words, that they can be acted upon and are not "greyed out"--before actually performing actions on them. If this parameter is set to True, Dogtail will raise an exception if an attempt is made to act on a greyed-out GUI node. This parameter can be set to False as a workaround for applications and GUI toolkits that don't report sensitivity properly. The default value is False.
  • searchBackoffDuration - This (float) parameter defines the time in seconds to delay before retrying when a search for a GUI node fails. What could cause a search for a GUI node to fail? Apart from errors in test script authoring, the GUI may encounter an error that prevents it from displaying a node, or it may simply take the GUI too long to render and display all the expected nodes. The default value is (0.5) seconds.
  • searchWarningThreshold - This (integer) parameter defines the number of search retries Dogtail will attempt before it starts logging the individual attempts. The default value is (3), which means that Dogtail will only start generating logging messages for a GUI node search after the first (2) search attempts fail. Why not simply set this parameter to (0) and have all failed attempts logged? Because the first few attempts may fail not due to any problem in the application under test or on the system where you're running the tests, but because of a delay in the application's GUI generating and displaying the node that your Dogtail test script is looking for.
  • searchCutoffCount - This (integer) parameter defines the number of times Dogtail will retry when a search for a GUI node fails. The default value is (20). This means that by default, if the target GUI node cannot be found, Dogtail's (20) retry attempts will take up to (10) seconds, as the 'searchBackoffDuration' parameter has a default value of (0.5) seconds.
  • debugSearching - This (boolean) parameter defines whether Dogtail will write info on search backoffs and retries to the debug log. The default value is False.
  • debugSleep - This (boolean) parameter defines whether Dogtail will write info on sleep statements in test scripts to the debug log. The default value is False.
  • defaultDelay - This (float) parameter defines the time in seconds that Dogtail will delay when sleeping. The default value is (0.5) seconds.
  • absoluteNodePaths - This (boolean) parameter defines whether Dogtail will include long (i.e., "absolute" or fully qualified) GUI node names in logging messages. The default value is False. Setting this parameter to True can be useful in debugging test scripts.
  • debugSearchPaths - This (boolean) parameter defines whether Dogtail writes out debug information when it invokes its SearchPath routines. We'll discuss these routines later on in this article.
  • actionDelay - This (float) parameter defines the delay after an action is executed. The default value is (1.0) seconds. Why is this delay important? The GUI under test may include elements that are created, or have their values filled in, dynamically. For example, filling in one field in a GUI with a specific value may cause other dependent fields to be filled in with default values. If this delay did not exist, Dogtail might attempt to access these dependent fields before the GUI is able to fill in their values, and the GUI would then overwrite the values entered by Dogtail.
  • runInterval - This (float) parameter defines the interval, (0.5) seconds by default, at which dogtail.utils.run() and dogtail.procedural.run() check to see if the application has started up.
  • runTimeout - This (integer) parameter defines the timeout, (30) seconds by default, after which dogtail.utils.run() and dogtail.procedural.run() give up on looking for the newly-started application.
  • debugTranslation - This (boolean) parameter defines whether Dogtail will write out debug information from the translation/i18n subsystem. The default value is False. We'll discuss how Dogtail can be used to create I18N tests when we examine the i18n.py module later in this article.
  • blinkOnActions - This (boolean) parameter defines whether Dogtail blinks a rectangle around a GUI node when an action is performed on it. The default value is False. It's very helpful to set this to True when you're debugging a test script or giving a demo, as it highlights the actions being performed by the test script as they happen.
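The three search parameters work together as a retry loop. Here's a minimal, self-contained sketch of that logic in plain Python (hypothetical helper and argument names; this is not Dogtail's actual implementation):

```python
import time

def find_with_retry(search, backoff=0.5, warn_threshold=3, cutoff=20, log=print):
    """Generic retry loop mirroring the semantics of searchBackoffDuration,
    searchWarningThreshold, and searchCutoffCount. (Illustrative sketch only.)"""
    for attempt in range(1, cutoff + 1):
        result = search()
        if result is not None:
            return result
        if attempt >= warn_threshold:
            # Only start logging once the warning threshold is reached.
            log("attempt %d of %d failed; retrying in %.1fs" % (attempt, cutoff, backoff))
        time.sleep(backoff)
    raise LookupError("GUI node not found after %d attempts" % cutoff)
```

With the defaults, a node that never appears costs 20 retries at 0.5 seconds each, matching the 10-second worst case described above.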

OK, how much customizing of these parameters should you really do?

My suggestion is to accept the defaults at first and see how they handle the characteristics of the application that you're testing. One of the challenges in automating GUI tests is that your test scripts sometimes have to wait for the GUI to display nodes, and these delays can happen in places that you might not expect. Here's an example that you can try out manually. Imagine that your test script has to locate a file in a large directory such as /usr/bin through a file browser GUI. Simple, right? Try this yourself with the GNOME "File Browser" GUI. The /usr/bin directory is so large that I had to wait over (15) seconds for it to read and display the contents of the directory.

There's one more point we need to cover about the logging and data directories. If multiple people use Dogtail on the same system, you may want to write the log and data files to different directories. You can change the directory settings at run-time in your Dogtail scripts with these statements:

>>> import os
>>> from dogtail.config import config
>>> config.scratchDir = os.path.join(os.environ['HOME'], 'dogtail')
>>> config.logDir = os.path.join(config.scratchDir, 'logs')
>>> config.dataDir = os.path.join(config.scratchDir, 'data')

This will put everything in ~/dogtail/. It also creates the directories.

Distribution Specific Packaging - the distro.py module

Linux is Linux, right? Well, sort of. You may find slight differences in how your application under test is packaged or installed on different Linux distributions. These differences could affect how your Dogtail scripts will run on each distribution. Luckily, you don't have to handle all these differences yourself; the distro module provides support for running Dogtail scripts on multiple distributions.

This module makes it possible for you to not only work with distributions such as Red Hat or Fedora, but also with tinderbox servers of specific modules built with JHBuild. The distro module makes this possible by abstracting away the packaging layer (RPMs for Red Hat and Fedora, apt for Debian, etc.) and letting you query the software under test for file, version, and dependency information.

What sorts of problems can the distro module help you with? Glad you asked:

  • Version - There are various ways to determine the version of an application. Most applications include a --version command line parameter, while for others, you may have to invoke an API call. You can always just search through the application's RPM file, but why re-invent the wheel? The distro module can do this for you, and your test scripts can simply make the getVersion() call.
  • Files - The getFiles() function returns a Python List object that contains the names of the application's files that are included in its RPM. If part of your automated test plan is to verify that the installation of the application under test is complete, then this function can be a big help.
  • Dependencies - The getDependencies() function returns a Python List object that contains the names of the application's dependencies. If you want your tests to be robust and able to accurately track problems caused by system mis-configurations, then this function can be a big help.

Here's an example of some of the information that is available through this module. Suppose you're testing the GNOME "gedit" text editor and want to make sure that the tests you will be writing can verify that they are accessing the correct version of gedit and that all of the packages on which gedit depends are installed. Your test scripts can access this information through these function calls in the distro module:

$ python
Python 2.3.4 (#1, Feb  6 2006, 10:38:46) 
[GCC 3.4.5 20051201 (Red Hat 3.4.5-2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import dogtail
>>> from dogtail import distro
Detecting distribution: Red Hat/Fedora/derived distribution
>>> print dogtail.distro.packageDb.getVersion('gedit')
2.8.1
>>> print dogtail.distro.packageDb.getDependencies('gedit')
['libxml2', 'popt', 'scrollkeeper', 'libart_lgpl', 'libglade2', 'gnome-vfs2', 'GConf2', 'glibc', 'ORBit2', 'libgnomeprintui22', 'xorg-x11-libs', 'libgnome', 'libgnomeprint22', 'gtk2', 'gail', 'pango', 'libbonoboui', 'eel2', 'aspell', 'libbonobo', 'zlib', 'atk', 'libgnomeui', 'libgnomecanvas', 'glib2', 'gtksourceview']
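If part of your test plan is install verification, a getFiles()-style list can drive a simple existence check. Here's a minimal sketch; the hard-coded list and the verify_install helper are stand-ins for a real packageDb.getFiles() call, not part of Dogtail itself:

```python
import os
import tempfile

def verify_install(file_list):
    """Return the packaged files that are missing on disk. The file_list
    argument stands in for what a getFiles()-style call would return."""
    return [f for f in file_list if not os.path.exists(f)]

# Demo with one file that exists and one that doesn't:
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.close()
missing = verify_install([tmp.name, '/no/such/file'])
os.unlink(tmp.name)
```

A test script could then log a failure for each entry in the returned list.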

Providing Globalization Support - the i18n.py module

The i18n module makes use of the GNU Translation Project's "gettext" utilities. In a nutshell, these utilities enable programs to make use of externalized strings for translating program output into multiple languages. The program's strings are stored external to the program in "PO" (portable object) files that can be read and edited by people, and "MO" (machine object) files that are binary.

For Dogtail scripts, this means that you can take advantage of the i18n module's functions to make use of gettext utilities. The i18n module includes these classes and functions:

  • class GettextTranslationDb - This is an implementation of a database of translations, which leverages gettext, using a single translation MO file.
  • class TranslatableString - This class represents a string that we want to match strings against, handling translation for us, by looking it up once at construction time.
  • function isMoFile - This function determines whether the file in question is a gettext MO file
  • function getMoFilesForPackage - This function looks up the named package and finds all gettext MO files within it and its dependent packages
  • function loadTranslationsFromPackageMoFiles - This function appends all of the gettext translation MO files used by the package (and its dependent packages) to the translation database list. In other words, this function enables you to make use of the application under test's own translations. Here's an example. If you run the example gedit test script with this command:
    	LANG=ja_JP.UTF-8 ./gedit-test-utf8-procedural-api.py
    	

Note that loadTranslationsFromPackageMoFiles is the only function that you need to explicitly call in your Dogtail scripts.

Then the gedit application will run with a Japanese character set, but the Dogtail script, which was written with gedit running with an English character set, can still run. Here's an illustration:

Note that the debug log written to stdout will show the Japanese characters. For example:
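Conceptually, the matching that makes this work looks up each string's translations once, then accepts either the untranslated form or any translation. Here's a minimal sketch using plain dicts as stand-ins for the gettext-backed translation databases (a hypothetical class shape, not Dogtail's actual code):

```python
class TranslatableString:
    """Sketch of the i18n module's idea: resolve a string's translations
    once at construction time, then match against either form."""
    def __init__(self, untranslated, translation_dbs):
        self.untranslated = untranslated
        # Each "database" below stands in for one loaded MO file.
        self.translations = {db.get(untranslated)
                             for db in translation_dbs
                             if db.get(untranslated)}

    def matches(self, string):
        return string == self.untranslated or string in self.translations

# One dict per loaded MO file (here, pretend Japanese translations):
dbs = [{'File': 'ファイル'}, {'Open': '開く'}]
file_label = TranslatableString('File', dbs)
```

This is why a script written against the English UI can still find the 'File' menu when gedit is running under LANG=ja_JP.UTF-8.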

Managing Test Results and Debug Logging - the logging.py module

Here's an all too typical exchange between QE and Development engineers:

QE'er (with enthusiasm): Hey, I found a bug. The test failed and crashed the application.
Developer (with resignation): Great--can you tell me what led up to the crash?
QE'er (with much less enthusiasm): Er... I'll try it again and get back to you.

(This is definitely not the kind of audit control that makes Sarbanes-Oxley compliance officers happy.)

One of the great advantages of automated tests is that they can be run unattended. However, you always need to have the tests log information so that you can (1) confirm that tests that pass are actually doing the right things and (2) debug test failures.

Luckily, you don't have to create your own logging system. You can use the Dogtail logging module. There are actually two different loggers that you can use; the log writer and the debug logger.

The log writer

The simplest way to use the log writer is to instantiate a "writer" object and then write to it. This object includes attributes that specify the log file name, the log file directory, and the script name. The function that you use to write messages to a writer is "writeResult." This function accepts a dictionary object as its input parameter, so it's easy to divide your log messages into a label or prefix and a message. For example, these statements:

>>> from dogtail import logging
>>> writer = logging.LogWriter()
>>> writer.writeResult({'INFO': 'test script: ' + writer.scriptName + ' starting'})

Will create a log file that looks like this:

##### TestScriptName Created on: 09 Jun 2006 11:13:59
2006.06.09 11:13:59     INFO:   test script: TestScriptName starting
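The log-line format shown above is easy to reproduce. This sketch shows how a one-item dictionary splits into the label prefix and the message (illustrative only; the real LogWriter also manages the log file and writes its header):

```python
import time

def format_result(entry, when=None):
    """Format a {label: message} dict as a timestamped log line,
    in the style of the output shown above. (Hypothetical helper.)"""
    when = when or time.localtime()
    stamp = time.strftime('%Y.%m.%d %H:%M:%S', when)
    label, message = next(iter(entry.items()))
    return '%s\t%s:\t%s' % (stamp, label, message)

line = format_result({'INFO': 'test script starting'},
                     time.strptime('2006.06.09 11:13:59', '%Y.%m.%d %H:%M:%S'))
```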

There is, however, a more effective way to use a log writer, and that is to connect its use with the testcase module (tc.py). We'll discuss the testcase module later on in the article. For now, we'll highlight a couple of its notable features as they relate to logging. The following code fragment is similar to the previous example in how it makes use of the "writeResult" function to write information to a log file. Its primary use, however, is to provide the testcase module with a means to report test pass/fail results. These statements:

import dogtail.tc

testString = dogtail.tc.TCString()
theLogWriter = testString.writer
theLogWriter.writeResult({'INFO': 'test script: ' + theLogWriter.scriptName + ' starting'})
testString.compare('TestLicense.py', licenseText.text, expectedLicenseString)
theLogWriter.writeResult({'INFO': 'test script: ' + theLogWriter.scriptName + ' ending'})

Will create a log file that looks like this:

##### TestLicense Created on: 09 Jun 2006 11:13:59
2006.06.09 11:13:59     INFO:   test script: TestLicense starting
2006.06.09 11:14:06     TestLicense.py: Passed
2006.06.09 11:14:07     INFO:   test script: TestLicense ending

The debug logger

In contrast, the debug logger writes to standard output. As its name implies, this is how you can display information to help you debug and trace through your dogtail scripts. For example, these statements:

>>> import dogtail.logging
>>> from dogtail.logging import debugLogger as logger
>>> logger.log('some sample text')

Will display this text on standard output:

some sample text

But wait, there's actually one more logger. This is the "IconLogger." It mirrors the output of DebugLogger as a tooltip of a notification area icon. What this means is that at the same time as DebugLogger is printing a statement, the IconLogger sets the tooltip of the Dogtail icon in the notification area to that same string. So if your terminal is hidden, you can still see what Dogtail's attempting to do.

Here's an illustration taken from a Dogtail script that tests the Frysk system monitor and debugger. Note that the last executed Dogtail statement was to click on the 'Open' button. You can see this statement both in the debug log in the terminal window on the left of the display and in the tool tip displayed by the IconLogger in the panel at the upper right of the display.

Enabling the Debug Logger to Handle Accessibility Identification - the path.py module

Like other types of programming, there are usually multiple ways to reach the same goal in a Dogtail script. This is especially true when it comes to finding and identifying GUI nodes. The path module defines a "SearchPath" class that Dogtail uses in its recording framework and for verbose script logging. This class provides a means to find the accessibility information contained (wrapped) in GUI nodes by starting at a root element and performing a recursive search.

It's not likely that you'll access this module directly when you write dogtail scripts. You will, however, use it indirectly when you perform searches for GUI nodes based on predicates. Which brings us to the predicate module.

Making Searches Easier - the predicate.py module

The tree API provides a dump() function that displays an indented listing of the GUI nodes in the object that you're examining. You can walk down that inverted tree from the root to each of its children to locate the GUI nodes that you want to examine. The syntax of a statement to access the child of a node is:

targetNode = parentNode.child(name='the name', roleName='the type of node', description='descriptive text', otherArguments...)

So, to access a button in a dialog box, you might use a statement such as this:

theButton = parentNode.child(name='Quit', roleName='button', description='press to quit')

Now, this is functional, but it's also wordy and a pain in the neck to type. But, don't panic as the predicate module makes it possible for you to achieve the same results with this much simpler, and more readable, statement:

theButton = parentNode.button('Quit')
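Under the hood, a predicate search like this boils down to a recursive walk that returns the first descendant matching all of the requested attributes. Here's a toy sketch; this is not Dogtail's actual Node class (the real implementation also retries using the search parameters discussed earlier), but it shows the shape of the idea:

```python
class Node:
    """Toy stand-in for a GUI node, showing how a predicate-based
    child() search and its convenience wrappers work."""
    def __init__(self, name='', roleName='', children=()):
        self.name = name
        self.roleName = roleName
        self.children = list(children)

    def child(self, **attrs):
        # Depth-first search for the first descendant matching all attrs.
        for c in self.children:
            if all(getattr(c, k, None) == v for k, v in attrs.items()):
                return c
            found = c.child(**attrs)
            if found is not None:
                return found
        return None

    def button(self, name):
        # Convenience wrappers like this just pre-fill the roleName.
        return self.child(name=name, roleName='button')

dialog = Node('Quit?', 'dialog', [
    Node('filler', 'filler', [Node('Quit', 'button')]),
])
theButton = dialog.button('Quit')
```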

The predicate module supports searches for these types of GUI nodes:

  • applications
  • windows
  • dialogs
  • menus
  • menu items
  • text entries
  • buttons
  • tabbed panels

Handling Raw Input by AT-SPI Event Generation - the rawinput.py module

The modules that we've examined so far all manipulate GUI nodes through their accessibility technology information. In some cases, however, you will want to exercise the GUI under test not by performing an action through a GUI node, but rather through the GNOME window manager directly. For example, you may want to type some text into an editor, or drag an object from one point to another, or select an object by double-clicking it.

For these cases, Dogtail provides you with the rawinput module. This module supports these types of input:

  • click
  • double-click
  • press
  • release
  • absolute motion
  • relative motion
  • drag (from point to point)
  • type text
  • press key

Note that Dogtail is not terribly smart about where it directs this raw input. It always sends the input to the topmost window on the desktop. The best description of this module that I've ever heard is that it "injects the input into the topmost window." Sort of the way a hammer "injects" a nail into a board.

Performing test results comparisons - the tc.py module

Automated software tests can take many forms, but at heart, they each have to perform the same basic function: to compare expected results with the actual results the test observed.

Which brings us to the "tc" module. This module provides classes and functions to enable you to perform comparisons on string, image (screen shot), and numeric test results. Each of these classes implements a "compare" function. You use this function to determine if the test was successful:

  • TCString Class - In this class, the compare function compares 2 strings to see if they are the same. You can specify the encoding to which the two strings are to be normalized for the comparison.
  • TCImage Class - In this class, the compare function calls ImageMagick's "compare" program. Default compares are based on size but metric based comparisons are also supported with a threshold determining pass/fail criteria.
  • TCNumber Class - In this class, the compare function compares 2 numbers to see if they are the same. You can specify how to normalize mixed type comparisons via the type argument. The function supports these numeric types: "int", "long", "float", "complex", "oct", "hex"
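A TCString-style compare amounts to normalizing both strings and reporting pass/fail. Here's a hedged sketch (a hypothetical helper, not the real class, which writes its result through a LogWriter as shown in the logging section):

```python
def compare_strings(label, actual, expected, encoding='utf-8', log=None):
    """Normalize two strings to one encoding, compare them, and report
    the result in the 'label: Passed/Failed' style shown earlier."""
    if isinstance(actual, bytes):
        actual = actual.decode(encoding)
    if isinstance(expected, bytes):
        expected = expected.decode(encoding)
    passed = (actual == expected)
    if log is not None:
        log('%s: %s' % (label, 'Passed' if passed else 'Failed'))
    return passed
```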

Now, if you think that these comparison functions sound a bit like the "assert" functions in PyUnit, you'd be right for the TCString and TCNumber classes, as they perform fairly simple string and number comparisons. (PyUnit does not include image-comparison functions.)

So, which should you use? Pyunit's assert functions or the tc module's comparison functions? We'll examine this question and make some recommendations in the section of this article that describes the procedural API module. The answer to the question really depends on your own test automation programming experience, and on the specific goals of your test automation project. But don't worry, regardless of your own situation, Dogtail won't lock you into only one possible solution. Yes, choice is a good thing. ;-)

Utilities to support testing - the utils.py module

And, just to make your life in test automation a bit easier still, the "utils" module provides these functions.

  • screenshot - This function calls the ImageMagick import command to take a screenshot that you can use for comparing test results. The function accepts a file name argument that can be specified as 'foo', 'foo.png', or using any other extension that ImageMagick supports. (PNG is the default.) Also, by default, screenshot file names are in the format foo_YYYYMMDD-hhmmss.png. The function's timeStamp argument may be set to False to create file names that do not include the timestamp.
  • run - This function lets you run an application. Note that if you want to execute a simple command such as "rm text.txt," you should use the Python functions os.popen() or os.system(). The run function keeps track of whether the application being run times out. The "dumb" argument controls how Dogtail monitors for the timeout. If this argument is omitted or set to False, Dogtail polls every runInterval seconds until the application has finished starting, or until runTimeout is reached. If the argument is set to True, Dogtail simply returns when the timeout is reached.
  • doDelay - This function inserts a delay between Dogtail statements. By default, the delay is defined in the "config" module, but you can also supply a delay value in seconds in an argument to the function.
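The screenshot naming convention described above is easy to sketch. This covers the naming only, not the ImageMagick capture; the timeStamp argument name follows the description above:

```python
import time

def screenshot_name(base, time_stamp=True, now=None):
    """Build a file name in the foo_YYYYMMDD-hhmmss.png style described
    above. PNG is assumed when no extension is given."""
    root, _, ext = base.partition('.')
    ext = ext or 'png'
    if not time_stamp:
        return '%s.%s' % (root, ext)
    stamp = time.strftime('%Y%m%d-%H%M%S', now or time.localtime())
    return '%s_%s.%s' % (root, stamp, ext)
```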

Dogtail Object Oriented API - the tree.py module

The "tree" module contains the object oriented Dogtail API. In this API, the GUI under test is addressed as a hierarchical data model. The tree module describes this model in these two classes:

  • The Action class represents an action that the accessibility layer exports as "performable" on a specific node, such as clicking on it. This class is a wrapper around the AT-SPI AccessibleAction object.
  • The primary class in the tree module is the "Node" class. Each element in the GNOME GUI Desktop, including your application under test, is a node. The nodes are organized into a tree, starting with a root at the top. Every GUI element for which an application correctly exports accessibility information is represented as a node in this tree. The applications are the children of this top-level root node. The windows and dialogs that comprise each application are the children of the application nodes.

    The Node class implements functions to enable Dogtail scripts to traverse and interact with the GUI nodes in an application by functioning as a wrapper around AT-SPI layer objects (i.e., "Accessibles") to make the accessibility information available. Your Dogtail scripts invoke these functions indirectly by reading and writing these attributes:

  • name (read-only string) - Wraps Accessible_getName on the Node's underlying Accessible data.
  • roleName (read-only string) - Wraps Accessible_getRoleName on the Node's underlying Accessible data.
  • role (read-only AT-SPI role enumerated type) - Wraps Accessible_getRole on the Node's underlying Accessible data.
  • description (read-only string) - Wraps Accessible_getDescription on the Node's underlying Accessible data.
  • parent (read-only Node instance) - A Node instance wrapping the parent, or None. Wraps Accessible_getParent.
  • children (read-only list of Node instances) - The children of this node, wrapping getChildCount and getChildAtIndex.
  • text (string) - For instances wrapping AccessibleText, this is the text. The attribute is read-only unless the instance wraps an AccessibleEditableText, in which case you can write values to it. Each write is logged in the debug log, and a delay is added. After the delay, the content of the node is checked to ensure that it has the expected value; if it does not, an exception is raised. This does not work for password dialogs (since all we get back are * characters). In that case, set the passwordText attribute instead.
  • passwordText (write-only string) - For password dialogs, write the password to this attribute instead of text.
  • caretOffset (read/write int) - For instances wrapping AccessibleText, the location of the text caret, expressed as an offset in characters.
  • combovalue (write-only string) - For combo-boxes. You write to this attribute to set the combo-box to the given value, with appropriate delays and logging.
  • stateSet (read-only StateSet instance) - Wraps Accessible_getStateSet; a set of boolean state flags.
  • relations (read-only list of AT-SPI Relation instances) - Wraps Accessible_getRelationSet.
  • labellee (read-only list of Node instances) - The node(s) for which this node is a label. This list is generated from "relations."
  • labeller (read-only list of Node instances) - The node(s) that is/are a label for this node. This list is also generated from "relations." The most common example of a labellee and its labeller counterpart are the labels associated with fields in a form. For example, a "First Name" labeller would correspond to the field that contains the "firstName" variable value.
  • sensitive (read-only boolean) - Indicates whether the node is sensitive (i.e., not greyed-out in the GUI display). This is generated from "stateSet" based on the presence of the AT-SPI SPI_STATE_SENSITIVE attribute. Note that not all applications set this up correctly.
  • showing (read-only boolean) - This is generated from "stateSet" based on the presence of the AT-SPI SPI_STATE_SHOWING attribute.
  • actions (read-only list of Action instances) - This list is generated from the AT-SPI Accessible_getAction and AccessibleAction_getNActions functions. For each action that is supported by a specific node, an action function is enabled. The actions defined for a node will be specific to that node.
  • extents (read-only Python tuple) - For instances wrapping a Component, the (x,y,w,h) screen extents of the component.
  • position (read-only Python tuple) - For instances wrapping a Component, the (x,y) screen position of the component.
  • size (read-only Python tuple) - For instances wrapping a Component, the (w,h) screen size of the component.
  • grabFocus - For instances wrapping a Component, attempt to set the keyboard input focus to that Node.
  • toolkit (read-only string) - For instances wrapping an application, the name of the toolkit.
  • version (read-only string) - For instances wrapping an application, the application version.
  • ID (read-only string) - For instances wrapping an application, the application ID, as defined by the AT-SPI layer.

The tree API also provides a "dump" function that enables you to view the tree of nodes that are present under any node.

All GUI automation test tools share a common requirement--they have to make it possible for test scripts to access and manipulate the discrete nodes that comprise the GUI. Originally, GUI test tools relied on accessing a GUI's nodes by means of the X and Y axis coordinates of each node. This approach was always problematic as those coordinates would change if a GUI's window was resized or its nodes were rearranged. Another, more flexible approach is for the GUI test tool to "scrape" the GUI and generate a data store that represents the nodes in the GUI. This approach handles situations where the GUI windows are resized or GUI nodes are rearranged as it deals with the nodes as objects, regardless of the X/Y coordinates of where the GUI node happens to be displayed. This approach has its problems too, however, as the datastore is static and can get out of synch with the current state of the GUI.

In contrast, Dogtail makes use of the GUI nodes' accessibility information as presented by the application under test. This information is dynamic, as it represents the current state of the application's GUI. Dogtail makes this information available to test scripts (and test script programmers) through its APIs. The object oriented API allows test scripts to access and manipulate GUI nodes through a hierarchical tree structure.

The tree.node.dump() function provides an easy way to examine this tree structure.

Let's start with a small example. We'll carry on from the previous example and look at the GNOME "gedit" text editor. If we want to build test scripts for gedit, we're going to need information on the nodes in the gedit GUI. One way to get a complete hierarchical list of these nodes' names and types is to use the dump() function. (Don't worry about the other modules that we import in this example. We'll discuss them later on in the article.)

$ python
Python 2.3.4 (#1, Feb  6 2006, 10:38:46) 
[GCC 3.4.5 20051201 (Red Hat 3.4.5-2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from dogtail import tree
>>> gedit = tree.root.application('gedit')
>>> gedit.dump()
{"gedit" application}
 Node roleName='frame' name='*Unsaved Document 1 - gedit' description=''
  Node roleName='filler' name='' description=''
   Node roleName='menu bar' name='' description=''
    Node roleName='menu' name='File' description='' text='File'
     click
(Lots more output follows)

Let's take a closer look at the output from the dump() function. Each GUI node has a name, a role name, and a description attribute. Some nodes also have additional attributes such as "text." Your test scripts will make use of these attributes to access and manipulate each node.

If the application under test has correctly defined its GUI's accessibility information, then you should be able to uniquely identify each GUI node through a combination of its attributes. If you can't, then you've found a bug in the application. That's right. The absence of this information will not only affect your ability to write Dogtail test scripts, it will also affect users trying to utilize the application through its accessibility information.

Let's say that the first action that we want a test script to do is to open an existing file to be edited. In the above dump() output we find these lines:

    Node roleName='menu' name='File' description='' text='File'
     click

This tells us the information that we need. The first line lists the type (roleName='menu') of the GUI node and its name (name='File'). The second line lists the actions that we can perform on that node. Note that while different types of GUI nodes support different actions--for example, buttons support "press" and "release" actions in addition to "click"--all GUI nodes support the "blink" action. This action enables you to visually confirm that you can access a GUI node and is very helpful in debugging problems.
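To make the structure of this node tree concrete, here is a small, self-contained sketch that models it in plain Python. The Node class and findChild() method here are illustrative stand-ins of my own invention, not Dogtail's actual implementation; they simply mimic how a script can locate a node by its roleName and name attributes in a hierarchy like the dump() output shown above.

```python
# A minimal model of an accessibility node tree, for illustration only.
# Dogtail's real tree module queries a live application via AT-SPI; this
# sketch just rebuilds the fragment of the gedit tree shown in the dump()
# output so the attribute-based search idea is clear.

class Node:
    def __init__(self, roleName, name='', description='',
                 actions=(), children=()):
        self.roleName = roleName
        self.name = name
        self.description = description
        self.actions = list(actions)      # e.g. ['click'] for a menu
        self.children = list(children)

    def findChild(self, roleName=None, name=None):
        """Depth-first search for the first descendant whose attributes
        match the ones given (None means "don't care")."""
        for child in self.children:
            if ((roleName is None or child.roleName == roleName) and
                    (name is None or child.name == name)):
                return child
            found = child.findChild(roleName, name)
            if found is not None:
                return found
        return None

# Rebuild the fragment of the gedit tree from the dump() output above.
file_menu = Node('menu', name='File', actions=['click'])
menu_bar = Node('menu bar', children=[file_menu])
filler = Node('filler', children=[menu_bar])
frame = Node('frame', name='*Unsaved Document 1 - gedit',
             children=[filler])

menu = frame.findChild(roleName='menu', name='File')
print(menu.name, menu.actions)   # File ['click']
```

The point of the sketch is the search strategy: a test script never cares where the File menu is drawn on screen, only that a node with roleName 'menu' and name 'File' exists somewhere under the application's frame.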

There's one more dump option that you might want to consider using. By default, output from dump() is in the hierarchical, indented text form shown in the above example. You can also display dump output as XML by executing this command:

>>> gedit.dump('xml')

Dogtail Procedural API - the procedural.py module

The object oriented API supported through the tree module gives you fine-grained control over the application under test: your Dogtail scripts can precisely access any discrete GUI node. This fine-grained approach can be tedious to work with, however, and may be overkill for some test scripts. The procedural API implemented in the procedural module, in contrast, makes it possible for you to write simpler Dogtail scripts.

The major difference between the two APIs is that while the object oriented API operates on individual GUI nodes, the procedural API has your Dogtail scripts traverse the GUI of the application under test by setting focus on windows, dialogs, and elements, identified by name.

For example, in the object oriented API, to click the Save button in the gedit text editor, you'd use these statements:

>>>run('gedit')
>>>gedit = tree.root.application('gedit')
>>>savebutton = gedit.button('Save')
>>>savebutton.click()

While, in the procedural API, you'd use these statements:

>>>run('gedit')
>>>focus.application('gedit')
>>>click('Save')

Ok, so which API should you use?

Remember how, in our discussion of the Test Case (tc) module, we talked about how Dogtail provides you with a choice between Pyunit's assert functions and the tc module's comparison functions? The answer to the question of which API you should use ties in with that choice. Dogtail was intentionally designed to support people with varying levels of experience with the Python language, with test automation frameworks, and with test automation in general.

So, the answer really is: it depends.

If you have experience in Python programming and with test automation, and you want to maintain fine-grained control over the test, then what makes the most sense is for you to use Pyunit as a test framework for your Dogtail tests, use the Pyunit assert functions wherever possible, and use the Object Oriented API to write the tests.

If, however, you're just starting out with Python and/or test automation, and you want to put together a basic set of tests, then what makes the most sense is for you to start by using the tc module's compare functions, and the Procedural API. Using these tools will get you up and running quicker, and you can always convert your tests to use Pyunit and the Object Oriented API when you acquire more experience.
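To illustrate the Pyunit side of this choice, here is a minimal sketch of what a Dogtail test wrapped in Pyunit (Python's standard unittest module) looks like. Since a real Dogtail assertion would read its value from a live GUI node, a hard-coded string stands in for the node's name attribute here; the class and value are hypothetical, but the test structure is standard unittest.

```python
# Sketch of the Pyunit (unittest) style discussed above. In a real
# Dogtail test the value under assertion would be read from the GUI, e.g.:
#   title = tree.root.application('gedit').child(roleName='frame').name
# Here a stand-in string keeps the example self-contained.

import unittest

class GeditTitleTest(unittest.TestCase):
    def test_window_title_names_the_application(self):
        title = '*Unsaved Document 1 - gedit'   # stand-in for a node's name
        self.assertTrue(title.endswith('gedit'))
        self.assertIn('Unsaved Document', title)

if __name__ == '__main__':
    unittest.main()
```

The payoff of this structure is that each GUI check becomes a named, independently reported test method, which is exactly the fine-grained reporting the tc module's simpler compare functions trade away.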

Dogtail helper libraries

There's a wonderful Dilbert™ cartoon from a few years ago where Wally amazes his co-workers by demonstrating a rare skill: code reuse. Well, after coming up to speed on Dogtail and building test scripts, you will likely find that your tests need to interact with other commonly used applications such as the nautilus file manager or the gedit text editor. The Dogtail "helper libraries" are intended to make these tasks easier by providing you with functions to support your use of these applications.

As of this writing, draft application helpers exist--in varying stages of development--for these applications:

  • epiphany
  • evolution
  • gcalctool
  • gedit
  • gnomepanel
  • kicker
  • konqueror
  • mozilla
  • nautilus
  • yelp

Note that as of this writing, the Dogtail team is considering moving these application helpers out of Dogtail and into a separate module. The goal of this move will be to enable Dogtail to remain relatively stable while the application helpers change to adapt to changes in the applications. Stay tuned for more news on this move in later articles in this series.

Let's continue with our example use of the gedit text editor and take a look at the functions included in the gedit helper library. The library currently includes these functions:

  • setText
  • getText
  • openLocation
  • saveAs
  • printPreview

Each of these functions simply includes the Dogtail/Python statements that you would otherwise have to write in your test scripts to perform the supported tasks. For example, the "openLocation" function looks like this:

    def openLocation(self, uri):
        menuItem = self.menu("File").menuItem("Open Location...")
        menuItem.click()
        dlg = self.dialog('Open Location')
        dlg.child(roleName = 'text').text = uri
        dlg.button('Open').click()
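The value of this pattern is easy to see once you imagine the same five statements repeated across dozens of test scripts. The sketch below models the pattern in plain, runnable Python: a FakeGedit class with hypothetical stand-in objects takes the place of a live gedit session, so the example shows only how a helper method bundles the individual GUI steps behind one call, not Dogtail's actual helper code.

```python
# Illustrative model of the helper-library pattern. FakeDialog and
# FakeGedit are invented stand-ins so the example runs without a live
# GUI; the real gedit helper drives actual accessibility nodes.

class FakeDialog:
    def __init__(self):
        self.text = None      # stands in for the dialog's text field
        self.opened = False   # stands in for clicking the Open button

class FakeGedit:
    def __init__(self):
        self.dialog = FakeDialog()
        self.open_locations = []

    def openLocation(self, uri):
        # Mirrors the helper shown above: fill in the location text,
        # then "click" Open -- one call instead of five GUI steps.
        self.dialog.text = uri
        self.dialog.opened = True
        self.open_locations.append(uri)

app = FakeGedit()
app.openLocation('file:///tmp/example.txt')
print(app.open_locations)   # ['file:///tmp/example.txt']
```

A test script that uses the helper stays short and readable, and when the application's Open Location dialog changes, only the helper needs updating, not every script that opens a file.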

What's next?

In the next installments of this series, we'll take a look at how we're using Dogtail to develop automated GUI tests for the Frysk system monitor and debugger, we'll examine the Object Oriented and Procedural APIs in greater detail, and walk through using Dogtail's headless mode and the record and playback tool.

References