Issue #22 August 2006

Using Dogtail to automate Frysk GUI tests

by Len DiMaggio

This is the third in a series of articles on the "Dogtail" automated GUI test framework. In the first two articles, we introduced Dogtail and then described its Python modules. In this article, we'll take a look at how Red Hat Development and QE Engineers are using Dogtail to create automated tests for the Frysk System Monitor and Debugger GUIs.

I have to admit that I've never been too fond of writing client GUI automation tests. I've spent most of my career working on the server side of things, where tests are written and executed from a command line interface using a test framework such as JUnit. Part of my problem stems from a series of bad experiences with various (non-open source) GUI test automation frameworks. Looking back, these bad experiences resemble a series of bad first dates. Things always started off well in that the demos looked interesting, but a few hours later, I was always looking for a way to just end things. In other words, when it came to breaking up the relationship with each of these test frameworks (and to paraphrase George Costanza), "it's not me, it's you!"

One of these situations involved my repeatedly crashing the test framework and having to make phone call after phone call to the vendor's software support team before someone finally admitted that, "yes, it does have a few memory leaks." Another situation involved my finding so many bugs in the test framework that the president of the company that created the framework wanted me to go work in their software QE department.

The chance to use the Dogtail framework, however, was appealing because of Dogtail's open source design, its use of accessibility technology, and the fact that it would give me a chance to not only contribute tests to a project, but also contribute to Dogtail itself.

The project for which we are using Dogtail to automate client GUI tests is the "Frysk" system monitor and debugger.

Frysk--The project

What's Frysk? The Frysk project website says it best. Frysk is an "intelligent, distributed, always-on system monitoring and debugging tool that allows developers and system administrators to monitor running processes and threads (including creation and destruction events), monitor the use of locking primitives, expose deadlocks, gather data, and debug any given process by either choosing it from a list or by accepting Frysk's offer to open a source code or other window on a process that is in the process of crashing or that has been misbehaving in certain user-definable ways..."

The Frysk project was started in 2005. In 2006, Frysk shipped as a technology preview in Red Hat Enterprise Linux 4, updates 3 and 4.

Frysk enables users (software developers and sysadmins are the target user groups) to monitor the operation of single processes, groups of processes, or process threads through the definition of debug sessions, which specify the processes to be watched, and "observers." An observer consists of a set of process-oriented rules and filters, similar to email rules and filters. When the state of a targeted process satisfies the selected rules and filters, Frysk "fires" the configured observer and executes the actions defined in it. For example, if a "fork" observer is configured for a process, the observer will fire whenever that process forks.

Frysk supports these types of observers:

  • Fork
  • Exec
  • Task Termination
  • Task Exit
  • Syscall
  • Task Clone

In addition, Frysk enables users to create their own customized observers.
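The rules-and-filters idea above can be sketched in a few lines of Python. This is a conceptual model only, with illustrative class, event, and field names that are my own assumptions, not Frysk's actual implementation:

```python
# A conceptual (and much simplified) model of how an observer behaves:
# filters decide whether an event is interesting, actions run when it is.
# All names here are illustrative assumptions, not Frysk's API.

class Observer:
    def __init__(self, name, filters, actions):
        self.name = name
        self.filters = filters    # predicates over an event
        self.actions = actions    # callables run when all filters match

    def handle(self, event):
        """Fire the observer's actions if every filter accepts the event."""
        if all(f(event) for f in self.filters):
            for action in self.actions:
                action(event)
            return True
        return False

log = []
fork_observer = Observer(
    name="Fork observer",
    filters=[lambda e: e["type"] == "fork",
             lambda e: e["process"] == "bash"],
    actions=[lambda e: log.append(f"{e['process']} forked")],
)

fork_observer.handle({"type": "fork", "process": "bash"})   # fires
fork_observer.handle({"type": "exec", "process": "bash"})   # filtered out
```

A custom observer, in this picture, is simply a user-supplied combination of filter predicates and actions.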

The project team members have taken unit testing seriously from the very start. There is a large library of unit tests to verify the "core" of Frysk. In late Spring of 2006, we were missing a set of automated tests to verify the operation of the Frysk GUI. We decided to use Dogtail as the tool with which to build these automated tests.

Frysk--The automation challenges

There are always challenges in creating automated tests for any GUI-based software. For example, the design of the GUI under test frequently changes throughout a program's development, and the automated tests have to be modified to stay in sync with those changes.

The Frysk project presented us with this "generic" automation challenge, as well as some other challenges specific to Frysk:

  • Rapid development of both the GUI and the underlying "core." One of the characteristics of the Frysk project that made the development of automated GUI tests challenging was the rapid rate at which the Frysk GUI and the Frysk "core" software were being developed. The design of all the Frysk software was also evolving in response to user input and usability analysis performed on the GUI. Strictly speaking, it would have been easier for GUI testing if we had waited to start the test development until the GUI was "done."
    The problems with this approach are that:
    • While the main design of the GUI would, at some point, be complete, the GUI will never really be done, as it will continue to be changed and improved in response to user input and contributions from the community.
    • Also, we wanted to build a set of GUI tests as early as possible in the project's life so that the tests could be used as part of the build verification test suite.
    • Finally, we wanted to not only use Dogtail to assist in the Frysk project's development, we wanted to use the Frysk project to assist in the Dogtail project's development. Which leads us to the next challenge...
  • Frysk and Dogtail: Testing a beta with a beta. When I mentioned our plan to use Dogtail to create automated tests for Frysk, someone joked to me, "Isn't that testing a beta with a beta?" I guess they were technically right, but while Frysk is a relatively new project, it has been available, either as a RHEL technology preview or directly from the Frysk project website, for more than six months. As for Dogtail, while it's also new, it's stable--and it really works! One drawback to using Dogtail is that while the source (Python) code is self-documenting, a complete user guide or tutorial had not been written. When I started using Dogtail on a more or less daily basis, I found myself taking lots of notes. These notes became the starting point of this series of articles.
  • Multiple learning curves. Finally, the QE staff (a.k.a. the author of this article) involved in designing and writing the automated tests had to deal with simultaneous learning curves in multiple areas.
    • First, I had to learn Python. Having several years of experience with Java made this easy, although I still find myself looking for the {} brackets!
    • Second, I had to learn Dogtail. As I said earlier, the Dogtail code is very well documented and includes some excellent online help. Writing these articles has also been a help in this regard. It's like that old saying, "you learn by teaching." By researching and writing the articles, I've learned more about Dogtail than if I only used it as an automation tool. Also, the Dogtail development team and community have been a great help.
    • Finally, I had to learn Frysk. Again, the Frysk development team and community have been a great help. In this context "community" is the exact right word. There's always someone available down the hall--or around the world--to answer questions.

Frysk automated GUI tests--Initial goals

In planning the GUI tests for Frysk, our goals were to verify the operation of the GUI:

  • In response to user actions such as the definition of Frysk debug sessions and customized process observers.
  • In response to process or thread events, such as the firing of a process fork type observer.
  • In response to adverse (negative test, hostile or destructive user) situations.

Getting started

The first thing that we had to do was to ensure that each GUI node had its accessibility information exposed via the GNOME Glade user interface builder. Defining the accessibility information is an easy task, provided that you do it while you're developing the interface. If you leave it until after the interface is complete, however, it's a tedious task to go back and fill it in for every field, menu, combo-box, etc. Here's an illustration of how to define the accessibility information in Glade. What we're doing for the Frysk project is to simply use the GUI node name as the accessibility name.

Glade GUI node definition
Glade GUI node accessibility definition
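Once the accessibility names are in place, Dogtail scripts can locate GUI nodes by those names. Here's a brief sketch; the application and widget names ('frysk', 'File', 'New Session', 'sessionName') are my own assumptions for illustration, and they would have to match the accessible names the real GUI exposes:

```python
# Sketch: locating GUI nodes through the accessibility names set in Glade.
# The names used below are illustrative assumptions, not Frysk's actual
# widget names.
from dogtail.tree import root

frysk = root.application('frysk')        # attach to the running application
frysk.menu('File').click()               # nodes are found by accessible name
frysk.menuItem('New Session').click()
entry = frysk.child('sessionName')       # the Glade node name doubles as the
entry.typeText('smoke-test-session')     #   accessibility name in our scheme
```

Because a script like this depends on a live desktop session with AT-SPI accessibility enabled (and on the Frysk GUI actually running), it's shown here for illustration rather than as something runnable in isolation.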

Once we had the accessibility information exposed, we started thinking about which tests to write first.

As I mentioned earlier in the article, we wanted to build a set of GUI tests as early as possible in the project's life so that the tests could be used as part of the build verification test suite. But we did not want to build tests that would quickly be made obsolete by changes made to the GUI.

So we started by looking at the data on which the GUI operates. We chose this route because the design of this data was relatively stable, so any test programs that worked with it would be unlikely to need frequent changes. This data takes two forms: definitions of Frysk debug sessions and custom observers, both of which are written to files by Frysk.

A layer of abstraction

When a user creates a session or custom observer with Frysk, the data for that session is written into an XML file in the user's $HOME/.frysk directory tree. The Frysk GUI enables the user to retrieve these sessions and custom observers from their XML files to be used with Frysk.

We decided to detach the test program code that handled this test data from the code that would directly interact with the elements in the GUI. By building in this data abstraction layer first, we would be able to not only make progress in building automated tests, but also build a foundation for the tests that would (later) exercise the parts of the GUI that dealt with user and process interaction.

The approach that we decided on would use the actual session and custom observer files generated by Frysk as input to tests. In this way, we could manually create sessions and custom observers through the Frysk GUI and then "play them back" through Dogtail tests scripts. We could also edit the session and custom observer data files to create large and complex test configurations that would be tedious and error-prone to create manually through the GUI.
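That second technique, generating large test configurations programmatically, can be sketched as follows. The file layout (element and attribute names) below is a hypothetical stand-in; a real test would have to match Frysk's actual file format exactly:

```python
# Programmatically generate a large observer data file for use as test input.
# The XML layout below is a hypothetical stand-in for Frysk's real format.
import xml.etree.ElementTree as ET

def build_observer_file(path, name, processes):
    observer = ET.Element("observer", {"name": name, "type": "fork"})
    filters = ET.SubElement(observer, "filterpoints")
    for proc in processes:
        ET.SubElement(filters, "filter", {"process": proc})
    actions = ET.SubElement(observer, "actionpoints")
    ET.SubElement(actions, "action", {"do": "log"})
    ET.ElementTree(observer).write(path)

# A configuration with 500 filter points -- far too tedious to click
# together by hand in the GUI.
build_observer_file("big_observer.xml", "Big fork observer",
                    [f"proc-{i}" for i in range(500)])
```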

Let's walk through an example. Here's a sample session XML file--note how it specifies the processes to be watched by Frysk.
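As an illustration only, a session file of this kind might look something like the following; the element and attribute names here are guesses for the sake of the example, not Frysk's actual schema:

```xml
<session name="smoke-test-session">
  <!-- the processes this debug session will watch -->
  <processes>
    <process name="bash"/>
    <process name="httpd"/>
  </processes>
  <observers>
    <observer ref="Fork observer"/>
  </observers>
</session>
```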

A sample test script is available [1]. When this script runs, it performs the following actions:

  • The script first creates a log file where test-related information is written.
  • The script then invokes an XML parser to read an observer data file that is passed to the script as a command-line argument.
  • The parser in turn interprets the XML data in the file and creates a corresponding observer object in memory.
  • This observer object is an instance of an observer class defined in the test support code.
  • This observer object in turn includes instances of observer action points (the actions for the observer to perform, such as logging an event) and filter points (the filters for the observer, such as the names of the processes to monitor). These action and filter points are instances of the ObserverPoints class.
  • Each of these actions and filters is, in turn, an instance of its own class in the test support code.
  • After the observer object has been created in memory, the script invokes the Frysk GUI and performs tests to verify that an observer that contains the characteristics defined in the original observer can be created and written to a new observer file.
  • Another observer object is then created in memory from this new file and is compared against the in-memory Observer created from the original test data file.
  • Finally, the script performs tests to query for, update, and then delete the newly created observer. Tests of this type are sometimes referred to as "CRUD" tests (for Create, Read, Update, and Delete).
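The round-trip comparison at the heart of the steps above can be sketched in a few lines of Python. Everything below (the file layout, the Observer class, and the equality check) is an illustrative assumption, not the actual Frysk test code:

```python
# Sketch of the script's round-trip check: parse an observer file into an
# object, write the object back out, re-parse it, and compare the two.
# The XML layout and the Observer class are hypothetical.
import xml.etree.ElementTree as ET

class Observer:
    def __init__(self, name, filters, actions):
        self.name, self.filters, self.actions = name, filters, actions

    def __eq__(self, other):
        return (self.name == other.name
                and self.filters == other.filters
                and self.actions == other.actions)

def parse_observer(path):
    root = ET.parse(path).getroot()
    return Observer(
        root.get("name"),
        [f.get("process") for f in root.iter("filter")],
        [a.get("do") for a in root.iter("action")],
    )

def write_observer(obs, path):
    root = ET.Element("observer", {"name": obs.name})
    for p in obs.filters:
        ET.SubElement(root, "filter", {"process": p})
    for a in obs.actions:
        ET.SubElement(root, "action", {"do": a})
    ET.ElementTree(root).write(path)

original = Observer("Fork observer", ["bash", "httpd"], ["log"])
write_observer(original, "observer.xml")      # as the GUI would save it
round_trip = parse_observer("observer.xml")   # as the test re-reads it
assert round_trip == original                 # the comparison step
```

The real tests drive the GUI between the write and the re-read, so the comparison verifies that the GUI faithfully reproduced the observer defined in the original data file.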

And here's a demo of the script in operation.

Taking advantage... of a Dogtail advantage

One of the great features of Dogtail that sets it apart from other GUI automation frameworks is its ability to dynamically discover new or changed GUI nodes. Some GUI automation frameworks enable test scripts to access and manipulate GUI nodes by first creating a static "datastore" of these nodes and then by having tests read this datastore at runtime. A weakness in this approach is that if the GUI changes at runtime, the datastore won't know about the changes.

Dogtail, on the other hand, is able to discover these changes in the GUI as they occur.

This is important as the Frysk GUI does change dynamically. Here's an example: in the custom observer creation/edit dialog, the user can select "+" and "-" buttons to control the number of action and filter points they want in their custom observer.


Because Dogtail is able to discover these new action and filter points as they are created, our tests can in turn create very complex test data for custom observers.


What's next?

As I'm writing this article, I'm also writing new Dogtail test scripts for the full set of Frysk GUI windows and dialogs. Once we have at least one test script for each Frysk GUI window, we'll start creating complex test scenarios to simulate real-world user actions.



Acknowledgments

The credit for the design and implementation of the Frysk project belongs to the members of the Frysk development team, led by Elena Zannoni. The developers are: Andrew Cagney, Stan Cox, Mike Cvet, Adam Jocksch, Chris Moller, Rick Moseley, Phil Muldoon, Ivan Pantuyev, Nurdin Premji, Sami Wagiaalla, and Mark Wielaard.

The success we've had in creating automated tests rests on their work to build "testability" into Frysk, and on their answering my sometimes seemingly endless stream of questions about Frysk's design and implementation.

The credit for the creation and development of Dogtail goes to the team of Red Hat associates (Zack Cerza, Ed Rousseau, and David Malcolm) who first brought it to life and the community that is making contributions toward its future.

About the Author

Len DiMaggio is a QE Engineer at Red Hat in Westford, Mass. (USA) and has published articles on software testing in Dr. Dobb's Journal, Software Development magazine, IBM developerWorks, STQE, and other journals.