                                                                -*- text -*-

============================
Writing tests for Subversion
============================


 * Test structure
 * Test results
   ** C test coding
   ** Python test coding
 * On-disk state
 * 'svn status' status
 * Differences between on-disk and status trees
 * 'svn ci' output
 * Gotchas


Test structure
==============

Tests start with a clean repository and working copy.  For testing
purposes, we use a versioned tree going by the name 'greek tree'.
See subversion/tests/greek-tree.txt for more.

This tree is then brought into the state in which we want to test
the program.  This can involve changing the working copy as well as
the repository.  Several commands (add, rm, update, commit) may be
required to bring the repository/working copy into the desired
state.

Once the working copy and repository meet the required
pre-conditions, the command-to-be-tested is executed.  After
execution, the output (stdout, stderr), the on-disk state and
'svn status' are checked to verify that the command worked as
expected.

If you need commands to construct the working copy+repository state,
the checks described above apply to each of the intermediate commands
just as they do to the final command.  That way, a failure of the
final command can be narrowed down to just that command, because the
working copy/repository combination was known to be in the correct
state.


Test results
============

Tests can generate two results:

 - Success, signalled by normal function termination
 - Failure, signalled by raising an exception
     In case of Python tests: an exception of type SVNFailure
     In case of C tests: returning an svn_error_t * != SVN_NO_ERROR

Sometimes it's necessary to write tests which are supposed to fail:
Subversion should behave a certain way, but does not yet.  Tests
like these are marked XFail (eXpected-to-FAIL).  If the program is
changed to support the tested behaviour, but the test is not
adjusted, it will XPASS (uneXpectedly-PASS).

Besides normal and XFail tests, tests can also be executed
conditionally by marking them Skip().  A condition can be given
under which the skip takes effect; otherwise the test is executed.
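The PASS/FAIL/XFAIL/XPASS classification can be sketched in plain
Python.  This is a toy illustration of the idea only, not the real
test-suite implementation; the names here are made up:

```python
# Toy sketch of XFail classification -- NOT the svntest
# implementation, just an illustration of the four outcomes.

class Failure(Exception):
    """Stand-in for the test suite's failure exception."""

def run_test(test_func, expected_to_fail=False):
    """Run a test and classify the result the way XFail marking does."""
    try:
        test_func()
        passed = True
    except Failure:
        passed = False
    if expected_to_fail:
        return "XFAIL" if not passed else "XPASS"
    return "PASS" if passed else "FAIL"

def broken_behaviour():
    # Behaviour Subversion should have, but does not yet.
    raise Failure("not implemented yet")

def fixed_behaviour():
    pass

print(run_test(broken_behaviour, expected_to_fail=True))  # XFAIL
print(run_test(fixed_behaviour, expected_to_fail=True))   # XPASS
```

The last line is the situation described above: the program was fixed
but the test still carries the XFail mark, so it reports XPASS.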



** C test coding
================

(Could someone fill in this section, please?)



** Python test coding
=====================

The Python tests abstract away ordering problems by storing status
information in trees.  Comparing expected and actual status means
comparing trees; there are routines to do the comparison for you.

Every command you issue should use the
svntest.actions.run_and_verify_* API.  If there's no such function
for the operation you want to execute, you can use
svntest.main.run_svn.  Note that this is an escape route only: the
results of this command are not checked, meaning you should include
any checks in your test yourself.
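To see why trees sidestep ordering problems, here is a minimal,
standalone sketch of an order-independent tree comparison using
nested dicts.  The real routines live in svntest.tree; this function
is hypothetical and only illustrates the idea:

```python
# Toy order-independent tree comparison -- illustrative only,
# not the svntest.tree implementation.

def compare_trees(expected, actual, path=""):
    """Compare two nested-dict trees; return a list of differences.

    A dict value describes a directory's children; any other value
    describes a node's status.  Because children are looked up by
    name, the order in which entries were produced is irrelevant.
    """
    diffs = []
    for name in sorted(set(expected) | set(actual)):
        child = path + "/" + name if path else name
        if name not in actual:
            diffs.append("missing: " + child)
        elif name not in expected:
            diffs.append("unexpected: " + child)
        elif isinstance(expected[name], dict) and isinstance(actual[name], dict):
            diffs.extend(compare_trees(expected[name], actual[name], child))
        elif expected[name] != actual[name]:
            diffs.append("differs: " + child)
    return diffs

expected = {"A": {"mu": "M", "B": {"lambda": " "}}, "iota": " "}
actual   = {"iota": " ", "A": {"B": {"lambda": " "}, "mu": " "}}
print(compare_trees(expected, actual))  # ['differs: A/mu']
```

Note that the two trees list their entries in different orders, yet
only the genuinely different node (A/mu) is reported.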


On-disk state
=============

On-disk state objects, which describe the actual state on disk, can
be generated with the svntest.tree.build_tree_from_wc() function.
If you need an object which describes the unchanged (virginal)
state, you can use svntest.actions.get_virginal_state().

Testing for on-disk state is required in several cases, among them:
 - Checking for specific file contents (after a merge, for example)
 - Checking for properties and their values

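As a rough, standalone picture of what "building a tree from the
working copy" means, the sketch below records each file's contents in
a dict keyed by path.  The helper name is hypothetical, not part of
the svntest API, and it ignores properties:

```python
# Standalone sketch of recording on-disk contents in a tree --
# build_tree_from_disk() is a made-up name, not the svntest API.
import os
import tempfile

def build_tree_from_disk(root):
    """Walk a directory and record each file's contents in a dict."""
    tree = {}
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, root).replace(os.sep, "/")
            with open(full) as f:
                tree[rel] = f.read()
    return tree

# Build a tiny fake working copy and check it against expectations.
wc = tempfile.mkdtemp()
os.makedirs(os.path.join(wc, "A"))
with open(os.path.join(wc, "iota"), "w") as f:
    f.write("This is the file 'iota'.\n")
with open(os.path.join(wc, "A", "mu"), "w") as f:
    f.write("This is the file 'mu'.\n")

expected_disk = {
    "iota": "This is the file 'iota'.\n",
    "A/mu": "This is the file 'mu'.\n",
}
assert build_tree_from_disk(wc) == expected_disk
```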


'svn status' status
===================

Normally, any change is validated (both before and after the tested
command) by running run_and_verify_status, or by passing an
expected_status to one of the other run_and_verify_* methods.

A clean expected_status can be obtained by calling
svntest.actions.get_virginal_state(<wc_dir>, <revision>).


Differences between on-disk and status trees
============================================

Both on-disk and status information are recorded in the same kind of
structure, but there are some differences in the fields that are
assigned to files in each case:

   Field name    On-disk   Status

   Contents         X        -
   Properties       X        -
   Status           -        X

### Note: maybe others?

'svn ci' output
===============

Most methods in the run_and_verify_* API take an expected_output
parameter.  This parameter describes which actions the command-line
client should report taking on each target.  So far there are:

 - 'Adding'
 - 'Deleting'
 - 'Replacing'
 - 'Sending'

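In effect, an expected_output check pairs each target with the action
the client should report for it.  The sketch below mimics the shape of
such output; the helper and the column width are illustrative
assumptions, not the svntest implementation:

```python
# Hypothetical sketch: pair each target with the action 'svn ci'
# should report for it.  Formatting is illustrative only.

def format_commit_output(actions):
    """Render (action, target) pairs as 'svn ci'-style report lines."""
    return ["%-14s %s" % (action, target) for action, target in actions]

expected = format_commit_output([
    ("Adding", "A/newfile"),
    ("Deleting", "A/B/lambda"),
    ("Sending", "iota"),
])
for line in expected:
    print(line)
```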

Gotchas
=======

* Minimize the use of 'run_command' and 'run_svn'

  The output of these commands is not checked by the test suite
  itself, so if you really need to use them, be sure to check any
  relevant output yourself.

  If you have any choice at all not to use them, please don't.

* Tests which check for failure as expected behaviour should PASS

  The XFAIL test status is *only* meant for tests which check for
  not-yet-supported but expected-to-be-supported program behaviour.

* File accesses can't use hardcoded '/' characters

  Because the tests also need to run on platforms with different
  path separators (MS Windows), you need to use the os.path.join()
  function to concatenate path strings.

* Paths within status structures *do* use '/' characters

  Paths within expected_status or expected_disk structures use '/'
  characters as path separators.
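  The two path rules above can be demonstrated directly.  posixpath
  and ntpath are the platform-specific implementations behind
  os.path, so both behaviours can be shown regardless of the platform
  this runs on:

```python
# On-disk access uses the platform separator; internal tree paths
# always use '/'.  posixpath/ntpath let us show both on any OS.
import ntpath
import posixpath

# On-disk access: os.path.join picks the platform separator.
print(posixpath.join("A", "B", "lambda"))  # A/B/lambda  (Unix)
print(ntpath.join("A", "B", "lambda"))     # A\B\lambda  (Windows)

# Inside expected_status/expected_disk trees, paths always use '/',
# even on Windows, so build them with '/' on every platform.
internal_path = posixpath.join("A", "D", "G", "pi")
assert internal_path == "A/D/G/pi"
```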

* Don't just check for output; check for the correct output

  You need to check not only whether a command generated output, but
  also whether that output meets your expectations:

   - If the program is supposed to generate an error, check that
     it generates the error you expect it to.
   - If the program does not generate an error, check that it
     gives you the confirmation you expect it to.

* Don't forget to check pre- and post-command conditions

  You need to verify that the status and on-disk structures are
  actually what you think they are before invoking the command
  you're testing.  Likewise, you need to verify that the command
  resulted in the expected output, status and on-disk structure.

* Don't forget to check!

  Yes, just check anything you can check.  If you don't, your test
  may be passing for all the wrong reasons.
---|