STAQS - Software Testing and Quality Services
0. Quick Overview
1. The SBTM Scan Tools - Now in Ruby!
2. SBTM Critical Success Factor - Senior Required for Test Lead
3. SBTM Critical Success Factor - Debrief Every Day
4. Writing Good Test Notes Takes Time and Practice
5. ET.XLS Tips
6. Testing History at Your Fingertips
7. Mine Your Data - Create Test Guides
Quick Overview

What is SBTM? It is a way of managing your Testing effort that is different from how you probably learned to do it. Here is some background information:
According to the Satisfice SBTM home page, Session-Based Test Management is "a method for measuring and managing exploratory testing." In a nutshell, it is a Test Management Framework. It is one of many that you can choose to use and apply in your particular situation. I happen to like it.
The SBTM Scan Tools - Now in Ruby!

After working with SBTM for a few years, I tried to customise the scripts and templates that I downloaded from the Satisfice web site. I had difficulty deciphering Perl, so I ported the scripts to Ruby in my spare time.
Why did I do this? Primarily because we use Ruby and WATIR for scripting/automation on our team, not Perl. We all have Ruby on our desktops and we use it all the time. No one in our company really knows or uses Perl, and the only reason we have it on our computers is to run the SBTM Scan tool scripts. Now that the scripts are in Ruby, we've customised both the templates and tools to better suit our needs.
The initial v1.x release of the Ruby SBTM scripts is freely available for your use if you want it. I've made many changes to the scripts since the initial port, and I would be interested to hear from others - let me know if you find them useful or if you have made any changes of your own.
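Since the session sheets are plain text, a scan script does not need to be complicated. Here is a minimal sketch of the kind of parsing involved - this is not the actual STAQS or Satisfice scan tool, and it assumes a simplified sheet layout where each section starts with an all-caps header line:

```ruby
# Minimal sketch of a session-sheet scanner (NOT the actual SBTM scripts).
# Assumes sections start with an all-caps header such as CHARTER or BUGS.
def scan_sheet(text)
  sections = Hash.new { |h, k| h[k] = [] }
  current = nil
  text.each_line do |line|
    if line =~ /\A([A-Z][A-Z ]+)\s*\z/   # e.g. "CHARTER", "TEST NOTES"
      current = $1.strip
    elsif current
      sections[current] << line.strip
    end
  end
  sections
end

sheet = <<~SHEET
  CHARTER
  Explore the login page for input-handling bugs.

  TEST NOTES
  Tried empty, long and unicode user names.

  BUGS
  #1234 Crash on 256-character user name.
SHEET

report = scan_sheet(sheet)
puts report["CHARTER"].reject(&:empty?).join(" ")
puts "Bugs found: #{report["BUGS"].reject(&:empty?).size}"
```

The real tools do much more (duration maths, tester breakdowns, report generation), but the heart of it is just this kind of section-by-section text parsing, which is why porting between Perl and Ruby is feasible.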
I also figured out how to change the date format in the ET.XLS file so that it supports DMY format (common in Canada, UK and other countries). If you are interested, you can download the updated spreadsheet here: et2_DMY_dates.zip (75 kb) Updated 8 Apr. 2010
SBTM Critical Success Factor - Senior Required for Test Lead

About six months after I started using SBTM at one company, I gave an overview presentation to the Development Team on how we do our testing. I wanted the developers and leads to be familiar with the new terminology that we used - for example, get used to hearing things like 'sessions' and 'debriefs' instead of words like 'test plans' and 'test cases'. Most importantly, I wanted to get across the importance of an 'uninterrupted session'. Distraction levels were generally high, but after the presentation the situation improved as people started asking us if we were "in a session" before interrupting us to do or discuss something else.
Sometime afterwards, I talked with the VP of R&D to get some feedback about our testing approach. He was generally very pleased and said something that I hadn't thought much about until that moment. He said that the success of our approach definitely depended upon having a good Senior person in the Test Lead role managing the testing effort.
Hmm. This made me think. It just so happened that I had many years' experience as a Team Lead, and I was definitely Senior in terms of my testing skills and knowledge, so was it a coincidence that we were successful? Or were we successful because I had both the passion and experience required to succeed?
What are some of the things I do in the Test Lead role in Session-Based Test Management?
There are more things that I could probably include, but the list above is a reasonable start. Is there anything there that I wouldn't expect of a Test or Team Lead managing their testing effort using a different approach? No, not really. But that's the point, isn't it? The Tools are a bit different and the steps or processes are not spelled out for you. Thinking is required.
I think it's a valid requirement to say that someone in the Test Lead position needs to have some prior experience managing testing efforts. They should have a pretty good idea of how to adapt to a new way of working, how to adequately support the team as they work, and how to communicate with managers outside of the test team.
SBTM Critical Success Factor - Debrief Every Day

Don't let these slip! When we first started with SBTM, there were times, when we were extremely busy and putting in overtime hours, that we would fall behind on our session debriefs. It was easy to say that we could make better use of our time by doing more testing rather than reviewing the testing already done. That makes sense, right?
Well actually, no, it doesn't work that way.
Here's the thing: session debriefs are like Code Reviews for programmers. Programmers don't have to do them either, but when you stop doing reviews you often notice the quality of work start to decline. When we finally got around to reviewing the backlog of session sheets, we would often discover additional tests and risks that would have been worth exploring. You don't really want to get these kinds of epiphanies after a release has gone out the door. It's too late then.
The 'debrief' aspect of SBTM is more than just a quality check. It complements your Exploratory Testing effort: you put your heads together and revisit the test strategy described in the Test Notes for coverage, completeness, risk and repeatability. "Two heads are better than one" is the force at work here to increase your individual testing powers. Each team member benefits as the testing notes are shared, and the knowledge helps build newer and better tests as you go along.
We never skip these debriefs anymore. Even when the going gets tough and deadlines are tight, we always make time every day to stop and debrief the sessions from the previous day. Personally, I like to do these first thing in the morning. It gives me a good idea of the problems we encountered from the previous day and lets me set or adjust the plan for the current day's work.
There was a question I remember asking people during the time when I was a Quality Auditor years ago: "How do you know what you're working on when you come in on any given workday?" It was a question to get someone talking about how they set priorities, where they get their tasks from, lines of communication, and so on. By doing the session debriefs in the morning, I know that I only have to worry about my session sheets being complete at the end of any given day. If I have to stay late, I stay late and finish up what I need to. But when I go home I don't generally worry about my work priorities for the next day. I know that as soon as I come in, my session sheets will be reviewed and I will review other people's sessions, and within the hour we will all know what we're working on for the rest of the day. Risks discussed, test strategies confirmed or adjusted, priorities clarified, we've got the game plan... it's time to Rock and Roll!
Don't let these slip. Do the session debriefs every day.
Writing Good Test Notes Takes Time and Practice

The session sheet template has many important elements, none of which I would want to give up managing or tracking in some way or another. The single most important element for me, though, has to be the 'Test Notes' section.
When I review a session sheet, I know that I can 'approve' it when I can say to myself that I know more or less exactly what someone did for the 'Duration' listed in the sheet. Whether it is one hour or four, if I can't say to myself that I know what this person has spent that whole time doing, then the session report is incomplete.
The analogy that I often give at times like these is that a session sheet is like a Science Report that you probably had to do at some point in elementary or high school. A typical Science Report has elements like the following: Objective/Purpose, Materials and Methods, Data and Observations, Conclusions or Inferences.
A good session report has all the same elements, although they may be laid out in a slightly different way.
The 'Charter' is the 'Objective'. It sets the scope for the session and should help you know when testing is complete and it's time to move on to something else. A Charter can be as specific or general as you like to help you get the information you need in the time allotted. Testing is for a specific purpose - what's that purpose?
The 'Materials and Methods', 'Data and Observations', 'Conclusions or Inferences' all seem to fall into the "Test Notes" section. That makes this section pretty important. So how do you get good at writing Test Notes to give you all this information? The same way you get to Carnegie Hall - practice, practice, practice!
A good note-taker is like a good Sports Commentator with a twist. A Sports Commentator gives you the play-by-play of the particular game or sporting event that you are watching. A really good commentator also commentates. That is, they provide you with background information on the players and the team, or some opinions about how or why certain things might have happened.
A good Test Note-taker does the same thing, and also needs to provide sufficient information to make their test session repeatable. Sounds simple enough, right? Ha! Try it.
You need to start by being clear about your 'Materials and Methods' (a.k.a. "The Setup").
Next there are the 'Data and Observations'.
Finally there are the 'Conclusions and Inferences'.
The above elements may seem like a lot to put into your Test Notes. To the novice tester, it probably is; there's a learning curve, and novices tend to start off by putting too much information into the Test Notes section. The more experienced tester knows which shortcuts to take and which assumptions can be clarified in the notes, to help you get to and focus on the important stuff. For example, I've read many good session reports that addressed all the elements and questions above and were only a few sentences long.
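To make that concrete, here is an illustrative sketch of a short session sheet with all the science-report elements in place. The section headings and contents are my own invention for this example; your template may differ:

```
CHARTER                                              (the Objective/Purpose)
Explore the report-export feature for data-loss risks.

TEST NOTES
Setup: build 2.3.1, sample database restored.        (Materials and Methods)
Exported a 10,000-row report to CSV and PDF;
compared row counts and totals against the grid.     (Data and Observations)
PDF export silently stops at 8,000 rows - a real
risk for month-end reports; worth a deeper session.  (Conclusions/Inferences)

BUGS
#1572 - PDF export truncates reports over 8,000 rows
```

Notice that it is only a few sentences long, yet another tester could repeat it, and a reviewer can account for the whole session.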
How do you know what's worth writing and what's worth skipping or implying? You need practice and a good Test Lead who cares about the quality and repeatability of the session notes you produce. It isn't easy, but it's a good habit to develop. Remember that Session-Based Test Management is also referred to as "High Accountability Exploratory Testing". So if you're not prepared to provide the 'accountability' then you might want to think about changing teams.
In the end, when you get good at writing test notes, I think you'll have a lot in common with those good scientists who keep journals as records of the experiments they perform and the discoveries they make. This is applied Creative Writing. Put your 'Thinking Cap' on, because not only do you have to think to do your job, but you need to be your own personal commentator as you go along and do it!
ET.XLS Tips

The original 'sessions.exe' archive includes a sample spreadsheet called ET.XLS that can be used to generate some interesting metrics based on the scanned session reports. It took me a few projects over the course of several months to figure out all the little details of how this spreadsheet works. The spreadsheet that I use now generates considerably more charts and information based upon the submitted reports than the original/default spreadsheet. I highly recommend that you play with this spreadsheet, get to know what the numbers and charts mean, and customise it to your own needs.
Getting Started with ToDos
To begin with, I tried to understand what each of the sheets/tabs in the Excel Workbook did, so that I could see what data would be useful to me in my current context/situation. Not all of the details are documented, and one item in particular that took me a while to figure out was the "TODO" sheet titled "TODO Items in the Hopper".
I didn't know how to get data into this report. The sequence of steps seemed a bit odd, so here's what I found out:
The ET.XLS file just never seems to detect any of the files in the 'c:\sessions\todos' folder/hopper, so what gives?
It turns out that while step 3 puts the empty session reports into the 'todos' folder, if you want these files to show up in the 'hopper' spreadsheet page, you need to move them into the 'approved' folder. Then when you run the 'scan-approved-then-run-report.bat' tool, it will detect the ToDo sheets and generate the required report that can be imported into the ET.XLS spreadsheet.
Once you know that, you can make adjustments accordingly. In the end, I don't use this spreadsheet to manage the ToDo hopper; I use another approach. It just took me a while to figure this tidbit out, so I thought I would share it here.
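The manual move is easy to script. Here's a small sketch of that step - it is not part of the official SBTM tools, and the folder paths and '.ses' extension are assumptions you should adjust to your own setup:

```ruby
# Sketch: move empty ToDo session sheets out of the 'todos' hopper and into
# the 'approved' folder so the scan tool picks them up on its next run.
# Paths and the .ses extension are illustrative - adjust to your layout.
require "fileutils"

def promote_todos(todos_dir, approved_dir)
  moved = []
  Dir.glob(File.join(todos_dir, "*.ses")).sort.each do |sheet|
    FileUtils.mv(sheet, approved_dir)
    moved << File.basename(sheet)
  end
  moved
end

# e.g. promote_todos("c:/sessions/todos", "c:/sessions/approved")
# ...then run scan-approved-then-run-report.bat as usual.
```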
Charts - Just the Beginning
The nice thing about having these numbers in an Excel spreadsheet is that you can then generate all sorts of charts and metrics to help you manage the testing effort.
The "Exploratory Testing: Test Sessions" chart on the first "Summary" worksheet/tab took me a while to really grasp. Once I figured out that it is essentially a productivity chart that shows you the cumulative number of sessions performed over time, I was able to use that information in a meaningful way.
A new chart that I include in my spreadsheet is the cumulative number of bugs reported (according to the submitted session reports) over time. I then superimpose that data series on top of the Summary sheet chart so that I can see both the number of sessions and the cumulative number of bugs reported over time on the same chart. I now find this chart to be even more useful in the day-to-day management of our test projects.
I have other charts for coverage, but I keep those charts separate. I think I have more work to do when it comes to chart and metric analysis. The good thing is that I have the data and numbers to work with. I just need the time to sit down and sift through it.
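If you want to experiment before wiring anything into the spreadsheet, the data series behind a cumulative-bugs chart is easy to compute. The sketch below assumes you can already get a (date, bugs-reported) pair out of each scanned session report; the input values here are made up for illustration:

```ruby
# Build a cumulative bug count per day from (date, bugs-in-session) pairs,
# e.g. as extracted from scanned session reports. Input data is illustrative.
require "date"

def cumulative_bugs(rows)
  total = 0
  rows.sort_by { |date, _| date }.map do |date, bugs|
    total += bugs
    [date, total]
  end
end

sessions = [
  [Date.new(2010, 4, 6), 2],
  [Date.new(2010, 4, 5), 3],
  [Date.new(2010, 4, 7), 0],
]

# Emit a CSV-ish series you can paste next to the cumulative-sessions data.
cumulative_bugs(sessions).each { |date, total| puts "#{date},#{total}" }
```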
Testing History at Your Fingertips

The session reports are like gold. We keep every session report we've ever created both on the network and stored safely away in a document control system. We have instant access to the testing notes going back to the initial release of our flagship software from several years ago. That's amazing!
The first thing that really blew me away about SBTM was that all my test records are stored electronically as simple text files on the network. I've been doing QA and testing software since the early '90s, and this is the first time that my test results have ever, really, truly been completely paperless. As something of an environmentalist, I find that rather pleasing. The sheer efficiency of it is also marvelous for anyone who has ever done process improvement work.
Talking about it is one thing, but I think an example is in order.
There was this one time when a Lead developer came up to me to talk about a particular feature in the application that we were testing. He asked if I knew how or when I tested it last because he couldn't find any bug reports in the system relating to that feature. I used my Ruby 'search' script to scan through all the archived session reports on the network. (BTW, a colleague of mine uses a visual grep utility to do the same thing - same result, different tool.) Within a minute, I had called up several session reports from a year before (3 releases back) that described the full extent of what we had tested, what we didn't, and more importantly what bugs we had reported during those sessions. With that knowledge we were able to devise an appropriate test strategy to complete the testing of the updated feature in the current release we were working on.
It was surreal. Simply fantastic! I don't ever recall having had the ability to search through my complete testing records for the last 3 years so quickly to bring up the exact moment and details of when I had reported specific bugs down to the minute. I challenge anyone using more 'traditional' test documentation approaches to live up to that kind of standard.
This is the information age, and "time is money." If you can't tell me what you've tested at any given moment over the last 3-5 years, how you tested it, who worked on it, what bugs you reported and any issues or problems that you encountered along the way, within minutes, then you need to take a serious look at the usefulness of your test management approach.
Text files aren't glorious. They aren't even particularly fancy. They are, however, easily readable on every major operating system that I've worked with; they compress really well in archive (zip) files; and they are easily searchable with a myriad of tools readily available on the internet. As a result, I can find out the Who, What, Where, When, How and Why of any testing activity our test team has ever performed since we first started keeping records several years and over a dozen releases back - all within Google time. That's progress!
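For the curious, a bare-bones version of such a search script might look something like this. It is a sketch in the same spirit as the 'search' script mentioned above, not the actual tool, and the server path and '.ses' extension are assumptions:

```ruby
# A minimal grep-style search over archived session sheets (a sketch, not
# the actual STAQS search tool). Returns {file path => [[line no, text]]}.
def search_sessions(dir, pattern)
  hits = Hash.new { |h, k| h[k] = [] }
  Dir.glob(File.join(dir, "**", "*.ses")).sort.each do |path|
    File.foreach(path).with_index(1) do |line, num|
      hits[path] << [num, line.strip] if line =~ pattern
    end
  end
  hits
end

# Example: find every session that ever mentioned the export feature.
# (The archive path below is made up - point it at your own network share.)
search_sessions("//server/sessions/approved", /export/i).each do |path, lines|
  puts path
  lines.each { |num, text| puts "  #{num}: #{text}" }
end
```

A visual grep utility, plain `grep -ri`, or a desktop search tool all give the same result; the point is only that plain-text records make the whole history searchable in seconds.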
Mine Your Data - Create Test Guides

This isn't a part of Session-Based Test Management, but it is such an important complement to it that I feel I need to mention it here.
All these session reports you keep add up over time. So what do you do when the next major release comes around and you find yourself having to perform regression testing on the major features from the last release?
Well for a start, I wouldn't recommend that you start exploring those features from scratch all over again. You already went through that exercise once in great detail and it probably took you anywhere from several days to several weeks to cover a particularly interesting or complex feature. Remember that SBTM is the framework to support your Exploratory Testing effort, and that ET is the simultaneous Learning, Test Design, and Test Execution of your application.
You already learned something the last time you tested a feature - the first time you saw it. Testing the same feature again will come with a greatly diminished learning curve this time around. So how can you regression test, in a timely manner, a feature that took several weeks to cover the first time?
Start by asking yourself: what did you actually learn from your last testing experience? You need to compile the session reports that cover a particular feature or area of the application in order to see the complete picture. When you bring them all together, you can identify the facts that you need to develop a good test strategy starting point. You should be able to extract things like:
Once you have these questions and answers in mind, it's time to bring them all together into a new document - a document I call a "Test Guide". This document summarizes the best parts of the testing strategy employed and what you learned the last time you tested a particular feature or area of the application. We use MS Word documents for our Test Guides; we keep them simple and useful, anywhere from two to ten pages in length. Don't use cover pages, a Table of Contents, or any of that fluff.
I should be clear that a Test Guide is just that - a guide - and contains no specific test cases. We sometimes describe interesting scenarios, but no test cases. I think I need to mention it again - NO TEST CASES! Individual test cases are useless unless they illustrate the application of a particular test technique.
Could a Test Guide contain Use Cases? Maybe. Are these use cases documented anywhere else? If so, then go and hit yourself over the head with your keyboard for asking such a question. A Test Guide wouldn't be easily maintainable if you duplicated information that is better maintained elsewhere. Just identify the link or location of where to get the other document(s) and let someone else worry about maintaining the use cases separately from your Test Strategy documents. If the use cases that you discovered during your testing are not documented anywhere else, I would recommend you keep them documented separately anyway. If you have nowhere better to store them, put them at the end of the document in an "Appendix" and just refer to them in the right place(s) in your strategy guide.
When is a good time to work on creating these Test Guides? Good question. It depends. If you have to test a feature right away and you haven't already got a guide, then just review the past session reports as a prelude to completing the testing you need to do now. Perhaps you can draft a Test Guide as you go through the material the second time around.
Generally, we work on these Test Guides in that down-time period after a release has shipped and before the next development project is ready for testing. Some people might call this test planning. I would probably more accurately call it "reviewing what we have learned to do it better and more efficiently the next time."
Now when we test an area that has been tested before, we just refer to the specific Test Guide in our session report Test Notes section. It's clear and efficient and helps us focus on what's important: learning something new and not relearning what we have already forgotten.
Have a question? Think something is missing? Drop me a line to ask any questions you may have about SBTM or my notes above.
©2011 Paul Carvalho. Contact me at: paul [at] staqs [dot] c o m