
Useful tips to generate test plans

by Roser Soler last modified 2010-06-04 16:46

Details of each level of the test plan, with an example based on the gvSIG JCRS extension.

.. contents::

Here we assume that the objective is to verify that the requirements of an application version work correctly. To do so, follow these tips at every level of the hierarchy.

At each level of the test plan an example based on the JCRS extension is included. At the end of this document there is a figure summarising the example.

Description for each level 
---------------------------


+ **System**: 

  The system is, in general, the application itself. In this case the system is gvSIG Desktop or gvSIG Mobile.

  In the pilot test plan the system is gvSIG Desktop v1.1.2.


+ **Subsystem**:

  The subsystems defined in the test plan must correspond to the gvSIG extensions, whether they are embedded in the host application or not. Be careful not to test a functionality that belongs to an extension which is not installed on the computer. Try to define the subsystems so that they are independent of each other; associating subsystems with gvSIG extensions (both embedded and external) guarantees a certain degree of independence.

  One of the subsystems of the pilot test plan is the JCRS v0.2 extension. In general, we aim to define a subsystem for each directory found in .../bin/gvSIG/extensions, as sketched below. In this respect, the functionalities to be tested in a subsystem are the ones provided by the gvSIG extensions (which can be accessed through the Preferences panel) hanging from the directory that defines the subsystem.
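
  As a minimal sketch (hypothetical Python, not an official gvSIG tool), the candidate subsystems could be derived from that directory; the extensions_dir argument is a placeholder for the path above::

    import os

    def candidate_subsystems(extensions_dir):
        """Return one candidate subsystem name per extension directory."""
        return sorted(
            name
            for name in os.listdir(extensions_dir)
            if os.path.isdir(os.path.join(extensions_dir, name))
        )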

+ **Modules**: 

  The modules defined in the subsystem must represent all the functions covered by that particular subsystem. Each module will be associated with a verb as shown in the example of the pilot plan. 

  In the pilot test plan the modules are: 1. "*Define CRS*", 2. "*Transform CRS*" and 3. "*Define user CRS*". A small comment on module "*Define user CRS*": a priori it seems to be a particular case of module 1, but using the JCRS extension we realize that module 1 concerns CRS assignment, while module 3 implies CRS editing.

  Modules do not need to coincide one-to-one with the extensions of the subsystem. Subsystem functionalities should be grouped so that it is easy for the tester to define and execute test cases; similar functionalities should go into the same module.

+ **Use cases**:

  These cases define what each module does from the user's point of view. Use cases represent the different ways of making use of the functionality defined in the corresponding module.

  In the pilot test plan, for module 1, the use cases are: 1.1 "*Set default CRS*", 1.2 "*Define view CRS*" and 1.3 "*Set Layer CRS*".
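
  To make the nesting of levels concrete, here is a minimal sketch (hypothetical Python data, purely illustrative) of how the pilot-plan levels seen so far fit together::

    # Hypothetical sketch: the pilot-plan hierarchy as nested data.
    # All names are taken from the examples in this document.
    test_plan = {
        "system": "gvSIG Desktop v1.1.2",
        "subsystems": {
            "JCRS v0.2": {
                "1. Define CRS": {                 # module
                    "1.1 Set default CRS": [],     # use case -> test cases
                    "1.2 Define view CRS": [],
                    "1.3 Set Layer CRS": [],
                },
                "2. Transform CRS": {},
                "3. Define user CRS": {},
            },
        },
    }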


+ **Test cases**:

  Test cases represent the possible paths (sequences of steps) with which we test a specific use case. At this point we have to consider the possible paths presented to us by a given window or form. Additionally, each test case must be detailed step by step to avoid ambiguous interpretations of the test.

  We will generate two different test cases whenever we find that the selection or inclusion of a parameter results in different setting options. For example, if selecting a parameter from a list changes the options the user sees, we have to develop a test case for each of the available options on the list.

  When we need to describe a test case by specifying a set of steps that has already been defined in another, simpler test case, we will try not to repeat the entire set of steps, referring instead to the name of the test case defined earlier. This has implications if the test fails: suppose that we are running test B, in which a set of steps to follow is defined in test A. If test A fails, and we have not passed test B previously, test B will also fail. One solution is to impose preconditions on test cases, for example having passed test A before beginning test B. These preconditions can be defined in the comments of test B.

  Variants within a single test case that do not change the way the test is passed are called variables. Each variable is identified by a [var_name], which takes different values depending on the specific case being tested (see the sketch after the example steps below).

  For the definition of test cases we will prioritize the most commonly used options in gvSIG, at least for a first version of the test plan. In later versions we will extend the test cases as much as possible.

  In the pilot test plan, for module 1 and use case 1.1, some of the identified test cases are: 1.1.1 "*Set default CRS type EPSG and press OK*", 1.1.2 "*Set default CRS type EPSG, consult InfoCRS and press OK*", 1.1.3 "*Set default CRS type EPSG and press Cancel*", etc. Additionally, each test case must include the sequence of steps that defines it. For example, for test case 1.1.1 we have:

  Test case: "*Set default CRS type EPSG and press OK*"

     1. Open gvSIG.

     2. Click on the Preferences.

     3. Select View from the Preferences tree.

     4. Click on Current Projection.

     5. On Type, select EPSG.

     6. Verify that the option [select_option] is selected.

     7. Type the search string [string].

     8. Click on Search.

     9. Click OK in the New CRS window.

    10. Click OK in the Preferences window.

    11. Create new View.

    12. Select the created View.

    13. Open the View.

    14. Check the [verification_condition].
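
  Seen as data, a test case is just a named sequence of steps in which the bracketed tokens are variables. A minimal sketch (hypothetical Python, purely illustrative, since the gvSIG test plans are executed by hand) of test case 1.1.1 as a reusable template::

    # Hypothetical sketch: test case 1.1.1 as a template whose [var_name]
    # tokens are bound to concrete values only when a scenario is created.
    TEST_CASE_1_1_1 = {
        "name": "Set default CRS type EPSG and press OK",
        "steps": [
            "Open gvSIG.",
            "Click on Preferences.",
            "Select View from the Preferences tree.",
            "Click on Current Projection.",
            "On Type, select EPSG.",
            "Verify that the option [select_option] is selected.",
            "Type the search string [string].",
            "Click on Search.",
            "Click OK in the New CRS window.",
            "Click OK in the Preferences window.",
            "Create a new View.",
            "Select the created View.",
            "Open the View.",
            "Check the [verification_condition].",
        ],
    }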


+ **Scenarios**:

  A scenario is created when we particularize a test case with concrete values for its variables.

  There are input and output variables (input and output data). Input variables must be defined prioritizing the most commonly used cases in gvSIG, because it is impossible to cover all the possibilities within a reasonable amount of time. Output variables must match the application's expected behavior. At this point we can speak of validating the test in terms of checking the output variables.

  When it is necessary to use an external file as an input variable (cartography, symbology, etc.), we will have to give the complete file name and describe where it is located.

  When the output variables consist of verifying several parameters on the same window, we will try to define the group as a single output datum to test.

  When defining scenarios, we first define the input data that verifies that the functionality is properly implemented. In this regard one must consider the different possibilities of input data, while bearing in mind that you cannot test all possible input values. Because of this, we have to consider defining sets of input data that are equivalent to each other (this is the definition of an equivalence class).

  Finally, we should create scenarios for input values that could make the application crash (putting numerical values into text fields, etc.). This way we check the error/warning messages that should appear; this is how we test the controlled exceptions of the subsystem (an invalid-input scenario of this kind is included in the sketch below).

  In the pilot test plan, for module 1, use case 1.1 and test case 1.1.1, an identified scenario is:

  "*Scenario 1*":

  Input data -> [select_option] = By code, [string] = 63096405

  Output data -> [verification_condition] = EPSG: 63096405 in the status bar of the created view.
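
  Continuing the sketch given under *Test cases*, a scenario is just a binding of each [var_name] to a concrete value. The helper below is hypothetical and purely illustrative; the second scenario shows the invalid-input kind discussed above::

    # Hypothetical sketch: instantiating scenarios from the test-case
    # template TEST_CASE_1_1_1 defined earlier.
    SCENARIO_1 = {
        "select_option": "By code",
        "string": "63096405",
        "verification_condition":
            "EPSG: 63096405 in the status bar of the created view",
    }

    # An invalid-input scenario: the expected outcome is a controlled
    # error/warning message, not a crash.
    SCENARIO_INVALID = {
        "select_option": "By code",
        "string": "not a code",
        "verification_condition": "an error message about the invalid code",
    }

    def instantiate(test_case, scenario):
        """Return the concrete steps for one scenario of a test case."""
        steps = []
        for step in test_case["steps"]:
            for name, value in scenario.items():
                step = step.replace("[" + name + "]", value)
            steps.append(step)
        return steps

    for line in instantiate(TEST_CASE_1_1_1, SCENARIO_1):
        print(line)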

Schematic example of the JCRS extension
----------------------------------------




.. figure:: images/Ejemplo_JCRS_8.png
   :align: center


.. figure:: images/Ejemplo2_JCRS_8.png
   :align: center



Other features
----------------


+  **Geoprocesses**: gvSIG internally creates an abstract class for each type of data, and applies the corresponding geoprocess to that type of data. For example, a layer of lines will be treated the same way independently of its origin: it does not matter whether the lines come from a DXF or an SHP file, the treatment and routines are the same.
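
  A minimal sketch of the idea (hypothetical Python; the class and method names are illustrative and are not gvSIG's actual API)::

    from abc import ABC, abstractmethod

    class LineLayer(ABC):
        """Abstract 'layer of lines', whatever the file format behind it."""

        @abstractmethod
        def geometries(self):
            """Yield the line geometries of the layer."""

    class ShpLineLayer(LineLayer):
        def __init__(self, path):
            self.path = path

        def geometries(self):
            return []  # would read line geometries from the shapefile

    class DxfLineLayer(LineLayer):
        def __init__(self, path):
            self.path = path

        def geometries(self):
            return []  # would read line geometries from the DXF file

    def some_geoprocess(layer: LineLayer):
        # The geoprocess depends only on the abstract type: the same
        # treatment and routines apply whatever the origin of the lines.
        return list(layer.geometries())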


+  **Main gvSIG**: One of the major problems is structuring the application into modules, use cases and test cases. When addressing the PDP of a module we can run into two problems that have the same solution.

  First of all, it is common to reach the test-case level and still need more levels to cover the whole tool or application of the specific module. A test plan is structured into 5 levels: Subsystem, Module, Use Case, Test Case and Scenario. As we subdivide the application functionalities to fit the test plan structure, in many cases we will get to the test case and its different scenarios and still need more sublevels to cover all the functionalities offered by the tool. At this point we have to ask ourselves whether it would be better to separate the module into other modules, so that we get something like:

  Module A --- Use Cases --- Test Cases, and in a step/action of a test case we are referred to another Module B, which contains its own Use Cases and Test Cases, and so on.

  This approach is possible as long as it has a certain logic, for example when the functionality to be separated into an independent module is common to several tools, and we do not leave the original module "in the lurch". Typical examples are the windows shared by several tools, such as the Color Selector or the Symbology Selector. These windows can be reached by different paths, so it is worth separating them into their own modules.

  When developing the Graphic Properties test cases, for point, line and polygon graphics (rectangle, circle and polygon), we face the situation described above: the windows opened when we access the properties of these types of graphics are the same as the symbology selector for each type of geometry. That is, the graphic properties window (for point graphics) is the same as the symbology selector window (for point symbols), and the same happens with lines and polygons. Therefore, when developing the steps of test cases related to the properties of such graphics, it is advisable to refer to the appropriate symbology selector test case when you reach the action of opening the graphic properties (see the sketch at the end of this section).

  Another way to save time and effort is the following: in many cases a tool or a dialog box can be opened from several places, as in the example above: the graphic properties (map module) can be accessed from the map drop-down menu, by right-clicking with the mouse, or by double-clicking on the graphic. In these cases it is advisable to develop all the test cases on the basis of the same path and, separately, to create one test case covering the other paths, developing it only up to the point where the properties window pops up, because once the properties window is open, what comes next is common to the other test cases.

  Therefore we should create a test case that chains together the actions of double-clicking on each graphic type and then right-clicking on each graphic type, introducing a verification condition for each action that opens the properties window. Note: the graphic properties for View Control, Legend, North, etc. are different from those of the point, line and polygon graphics, so they should be developed in independent test cases.

  We can also put together, in a single test case, the testing of simple tools or features that do not require a dialog box and that provide an immediate visual result (immediate verification): for example, the navigation tools (zoom in, zoom out, panning, zoom to extent, previous zoom, and so on). It is a waste of time having to open and close the program just to test a zoom in.
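
  To close this section, here is a minimal sketch (hypothetical Python data, purely illustrative) of the reuse described above: a test case that refers to a shared test case instead of repeating its steps, with the dependency recorded as a precondition::

    # Hypothetical sketch: Module A's test case reuses Module B's test
    # case by reference (the Symbology Selector example above).
    SYMBOLOGY_SELECTOR_POINT = {
        "name": "Select a point symbol in the Symbology Selector",
        "steps": [
            "Choose a symbol from the list.",
            "Click OK in the Symbology Selector window.",
        ],
    }

    GRAPHIC_PROPERTIES_POINT = {
        "name": "Edit the properties of a point graphic",
        # Precondition: the referenced test case must already have passed.
        "preconditions": [SYMBOLOGY_SELECTOR_POINT["name"]],
        "steps": [
            "Double-click on the point graphic.",
            # Instead of duplicating the shared steps, refer to them:
            ("RUN", SYMBOLOGY_SELECTOR_POINT["name"]),
        ],
    }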
