                                                       -*- text -*-

                    ============================
                    Writing tests for Subversion
                    ============================



 * Test structure
 * Test results
 ** C test coding
 ** Python test coding
 * On-disk state
 * 'svn status' status
 * Differences between on-disk and status trees
 * 'svn ci' output
 * Gotcha's


Test structure
==============

Tests start with a clean repository and working copy.  For the
purpose of testing, we use a versioned tree going by the name
'greektree'.  See subversion/tests/greek-tree.txt for more.

This tree is then modified into the state in which we want to test
our program.  This can involve changing the working copy as well as
the repository.  Several commands (add, rm, update, commit) may be
required to bring the repository and working copy into the desired
state.

Once the working copy and repository satisfy the required
pre-conditions, the command-to-be-tested is executed.  After
execution, the output (stdout, stderr), the on-disk state and
'svn status' are checked to verify that the command worked as
expected.

If you need several commands to construct the working copy and
repository state, the checks described above apply to each of the
intermediate commands just as they do to the final command.  That
way, a failure of the final command can be narrowed down to just
that command, because the working copy/repository combination is
known to have been in the correct state beforehand.
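
As an illustration, a very small test following this pattern might
look like the sketch below (the scenario and the expected values are
invented for this example; real tests live in
subversion/tests/cmdline/):

  import os
  import svntest
  Item = svntest.wc.StateItem

  def commit_modified_file(sbox):
    "commit a locally modified file"

    # 'sbox' provides a pristine greek-tree repository and working copy.
    sbox.build()
    wc_dir = sbox.wc_dir

    # Bring the working copy into the state we want to test:
    # a simple local modification of an existing file.
    mu_path = os.path.join(wc_dir, 'A', 'mu')
    svntest.main.file_append(mu_path, "a new line\n")

    # Describe what 'svn ci' should report and what 'svn status'
    # should show afterwards, then run the command-to-be-tested.
    expected_output = svntest.wc.State(wc_dir, {
      'A/mu' : Item(verb='Sending'),
      })
    expected_status = svntest.actions.get_virginal_state(wc_dir, 1)
    expected_status.tweak('A/mu', wc_rev=2)

    svntest.actions.run_and_verify_commit(wc_dir, expected_output,
                                          expected_status, None, wc_dir)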


Test results
============

A test can produce one of two results:

  - Success, signalled by normal function termination
  - Failure, signalled by raising an exception (see the sketch below)
     In case of Python tests: an exception of type SVNFailure
     In case of C tests:      return an svn_error_t * != SVN_NO_ERROR
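
For the Python case, a failing check boils down to raising the
failure exception (normally a run_and_verify_* helper raises it for
you).  A minimal sketch, assuming the exception class is
svntest.Failure and using an invented check:

  import os
  import svntest

  def iota_must_exist(sbox):
    "illustrate signalling failure"
    sbox.build()

    # Reaching the end of the function means success; any unexpected
    # condition is reported by raising the failure exception.
    if not os.path.exists(os.path.join(sbox.wc_dir, 'iota')):
      raise svntest.Failure("iota missing from the working copy")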

Sometimes it is necessary to write a test for behaviour which
Subversion should have, but does not have yet; such a test is
expected to fail until the program catches up.  Tests like these are
marked XFail (eXpected-to-FAIL).  If the program is changed to
support the tested behaviour, but the test is not adjusted, it will
XPASS (uneXpectedly PASS).

Besides normal and XFail tests, tests can also be run conditionally
by marking them Skip().  A condition can be given under which the
skip takes effect; otherwise the test is executed (see the example
below).
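
Marking is done in the test_list at the bottom of the test script.
A rough sketch (the test names are invented, and whether the Skip
condition is given as a plain value or as a callable has varied
between framework versions - check svntest/testcase.py in your tree):

  import os
  import svntest

  # Abbreviations commonly set up near the top of a test script.
  XFail = svntest.testcase.XFail
  Skip = svntest.testcase.Skip

  # The names below stand for test functions defined earlier in the file.
  test_list = [ None,
                commit_modified_file,            # a normal test
                XFail(rename_dir_keeps_props),   # expected to fail for now
                Skip(symlink_test,               # skipped when the condition
                     os.name != 'posix'),        # holds, run otherwise
               ]

  if __name__ == '__main__':
    svntest.main.run_tests(test_list)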



** C test coding
================

(Could someone fill in this section please?!)



** Python test coding
=====================

The Python tests avoid ordering problems by storing status
information in trees.  Comparing expected and actual status means
comparing trees - there are routines that do the comparison for you.

Every command you issue should use the
svntest.actions.run_and_verify_* API (a small example follows
below).  If there is no such function for the operation you want to
execute, you can fall back on svntest.main.run_svn.  Note that this
is an escape route only: the results of that command are not
checked, meaning you have to include any checks in your test
yourself.
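
For instance, running a plain client command with framework-side
checking might look like this (a minimal sketch; the subcommand and
the expectations are only an example):

  import svntest

  def info_prints_something(sbox):
    "run a command through the verifying API"
    sbox.build()

    # Expected stdout: at least one line of output.
    # Expected stderr: nothing.
    svntest.actions.run_and_verify_svn("'svn info' should print something",
                                       svntest.verify.AnyOutput, [],
                                       'info', sbox.wc_dir)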


On-disk state
=============

On-disk state objects can be generated with the
svntest.tree.build_tree_from_wc() function, which describes the
actual state on disk.  If you need an object which describes the
unchanged (virginal) state, you can use
svntest.actions.get_virginal_state().

Testing for on-disk states is required in several cases, among
which (see the sketch after this list):
 - Checking for specific file contents (after a merge, for example)
 - Checking for properties and their values
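
A common way to build the expected on-disk tree is to start from the
pristine greek tree and adjust it.  A minimal sketch (the contents
and paths are invented; the resulting object is what you pass as the
expected_disk argument to run_and_verify_update, run_and_verify_merge
and friends):

  import svntest
  Item = svntest.wc.StateItem

  # Start from the pristine greek tree ...
  expected_disk = svntest.main.greek_state.copy()

  # ... change the contents we expect for an existing file ...
  expected_disk.tweak('A/mu',
                      contents="This is the file 'mu'.\nline added by merge\n")

  # ... and add a file we expect the command to create.
  expected_disk.add({
    'A/newfile' : Item("contents of the new file\n"),
    })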


'svn status' status
===================

Normally any change is validated at least by running
run_and_verify_status (both before and after the change), or by
passing an expected_status tree to one of the other run_and_verify_*
methods.

A clean expected_status can be obtained by calling
svntest.actions.get_virginal_state(<wc_dir>, <revision>).
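
For example (the working copy directory comes from the test's
sandbox, and the path being modified is invented):

  import os
  import svntest

  def status_shows_local_mod(sbox):
    "status after a local text modification"
    sbox.build()
    wc_dir = sbox.wc_dir

    # Make a local modification.
    gamma_path = os.path.join(wc_dir, 'A', 'D', 'gamma')
    svntest.main.file_append(gamma_path, "local change\n")

    # Pristine status at revision 1, adjusted for the modification.
    expected_status = svntest.actions.get_virginal_state(wc_dir, 1)
    expected_status.tweak('A/D/gamma', status='M ')

    svntest.actions.run_and_verify_status(wc_dir, expected_status)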


Differences between on-disk and status trees
============================================

On-disk and status information are recorded in the same kind of
structure, but there are some differences in the fields that are
assigned to files in each case:

    Field name                   On-disk         Status

    Contents                        X               -
    Properties                      X               -
    Status                          -               X

### Note: maybe others?
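
In practice that means content and property expectations go into the
disk tree, while the status columns and revision go into the status
tree.  A small sketch (paths and values are invented; wc_dir is
assumed to come from the sandbox as in the earlier sketches, and note
that some run_and_verify_* calls only compare properties when
explicitly asked to):

  import svntest
  Item = svntest.wc.StateItem

  expected_disk = svntest.main.greek_state.copy()
  expected_status = svntest.actions.get_virginal_state(wc_dir, 1)

  # Contents and properties live in the on-disk tree ...
  expected_disk.tweak('A/B/lambda',
                      contents="This is the file 'lambda'.\nlocal change\n",
                      props={'color' : 'green'})

  # ... the 'svn status' columns and revision live in the status tree.
  expected_status.tweak('A/B/lambda', status='MM', wc_rev=1)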

'svn ci' output
===============

Most methods in the run_and_verify_* API take an expected_output
parameter.  This parameter describes which action the command line
client should report taking on each target (see the example after
this list).  So far there are:

 - 'Adding'
 - 'Deleting'
 - 'Replacing'
 - 'Sending'
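
An expected_output tree for a commit maps each affected path to the
verb the client should print for it, roughly like this (the mix of
paths is invented, and wc_dir is assumed to come from the sandbox):

  import svntest
  Item = svntest.wc.StateItem

  expected_output = svntest.wc.State(wc_dir, {
    'A/D/G/rho'   : Item(verb='Sending'),    # locally modified file
    'A/D/G/tau'   : Item(verb='Deleting'),   # scheduled for deletion
    'A/D/newfile' : Item(verb='Adding'),     # newly added file
    })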


Gotcha's
========

 * Minimize the use of 'run_command' and 'run_svn'

   The output of these commands is not checked by the test suite
   itself, so if you really need to use them, be sure to check any
   relevant output yourself (see the sketch below).

   If you have any choice at all not to use them, please don't.
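
   A minimal sketch of such a manual check (the subcommand is only an
   example, wc_dir is assumed to come from the sandbox, and the exact
   return value of run_svn is an assumption - check svntest/main.py
   in your tree):

     import svntest

     # run_svn verifies nothing, so inspect the captured output yourself.
     exit_code, output, errput = svntest.main.run_svn(None, 'status', '-v',
                                                      wc_dir)
     if errput:
       raise svntest.Failure("unexpected stderr from 'svn status'")
     if not output:
       raise svntest.Failure("'svn status -v' printed nothing at all")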

 * Tests which check for failure as expected behaviour should PASS

   The XFail test status is *only* meant for tests which exercise
   behaviour that is not yet supported but expected to be supported
   in the future.

 * File accesses can't use hardcoded '/' characters

   Because the tests also need to run on platforms with different
   path separators (MS Windows), you need to use the os.path.join()
   function to concatenate path strings.

 * Paths within status structures *do* use '/' characters

   Paths within expected_status or expected_disk structures use '/'
   characters as path separators (the sketch below shows both
   conventions side by side).
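
   For example (paths and contents are invented; wc_dir,
   expected_status and expected_disk as in the earlier sketches):

     import os
     import svntest

     # On-disk access: build the path with os.path.join() ...
     mu_path = os.path.join(wc_dir, 'A', 'mu')
     svntest.main.file_append(mu_path, "more text\n")

     # ... but address the same node in the trees with '/'.
     expected_status.tweak('A/mu', status='M ')
     expected_disk.tweak('A/mu',
                         contents="This is the file 'mu'.\nmore text\n")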

 * Don't forget to check output for correctness

   You need to check not only whether a command generated output, but
   also whether that output meets your expectations (see the sketch
   after this list):

     - If the program is supposed to generate an error, check
       that it generates the error you expect it to.
     - If the program does not generate an error, check that
       it gives you the confirmation you expect it to.
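
   With run_and_verify_svn both cases can be expressed directly; a
   rough sketch (wc_dir as in the earlier sketches, the paths are
   invented and the error regexp is an assumption - match whatever
   your client actually prints):

     import os
     import svntest

     iota_path = os.path.join(wc_dir, 'iota')
     mu_path = os.path.join(wc_dir, 'A', 'mu')

     # Expected to fail: no stdout, and an error mentioning the problem.
     svntest.actions.run_and_verify_svn("copy onto an existing path",
                                        [], ".*already exists.*",
                                        'cp', mu_path, iota_path)

     # Expected to succeed: some confirmation output, and no stderr.
     svntest.actions.run_and_verify_svn("schedule mu for deletion",
                                        svntest.verify.AnyOutput, [],
                                        'rm', mu_path)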

 * Don't forget to check pre- and post-command conditions

   You need to verify that the status and on-disk structures are
   actually what you think they are before invoking the command
   you're testing.  Likewise, you need to verify that the command
   resulted in the expected output, status and on-disk structure.

 * Don't forget to check!

   Yes, just check anything you can check.  If you don't, your test
   may be passing for all the wrong reasons.