Changeset 4886
Timestamp: 23 Jan 2010, 18:02:25
File: waeup/trunk/src/waeup/utils/batching.txt (1 edited)
--- waeup/trunk/src/waeup/utils/batching.txt (r4879)
+++ waeup/trunk/src/waeup/utils/batching.txt (r4886)
@@ -213,4 +213,14 @@
 ... setattr(obj, key, value)
 
+If we also want the results being logged, we must provide a logger
+(this is optional):
+
+>>> import logging
+>>> logger = logging.getLogger('stoneville')
+>>> logger.setLevel(logging.DEBUG)
+>>> logger.propagate = False
+>>> handler = logging.FileHandler('stoneville.log', 'w')
+>>> logger.addHandler(handler)
+
 Create the fellows:
 
@@ -218,5 +228,5 @@
 >>> processor.doImport('newcomers.csv',
 ... ['name', 'dinoports', 'owner', 'taxpayer'],
-... mode='create', user='Bob')
+... mode='create', user='Bob', logger=logger)
 (4, 0)
 
@@ -248,16 +258,17 @@
 The ``user`` parameter finally is optional and only used for logging.
 
-We can, by the way, see the results of our run in a logfile which is
-named ``newcomers.csv.create.msg``:
-
->>> print open('newcomers.csv.create.msg').read()
-Source: newcomers.csv
-Mode: create
-Date: ...
-User: Bob
-Failed datasets: newcomers.csv.create.pending
-Processing time: ... s (... s/item)
-Processed: 4 lines (4 successful/ 0 failed)
-<BLANKLINE>
+We can, by the way, see the results of our run in a logfile if we
+provided a logger during the call:
+
+>>> #print open('newcomers.csv.create.msg').read()
+>>> print open('stoneville.log').read()
+--------------------
+Bob: Batch processing finished: OK
+Bob: Source: newcomers.csv
+Bob: Mode: create
+Bob: User: Bob
+Bob: Processing time: ... s (... s/item)
+Bob: Processed: 4 lines (4 successful/ 0 failed)
+--------------------
 
 As we can see, the processing was successful. Otherwise, all problems
@@ -266,17 +277,22 @@
 >>> processor.doImport('newcomers.csv',
 ... ['name', 'dinoports', 'owner', 'taxpayer'],
-... mode='create', user='Bob')
+... mode='create', user='Bob', logger=logger)
 (4, 4)
 
 The log file will tell us this in more detail:
 
->>> print open('newcomers.csv.create.msg').read()
-Source: newcomers.csv
-Mode: create
-Date: ...
-User: Bob
-Failed datasets: newcomers.csv.create.pending
-Processing time: ... s (... s/item)
-Processed: 4 lines (0 successful/ 4 failed)
+>>> #print open('newcomers.csv.create.msg').read()
+>>> print open('stoneville.log').read()
+--------------------
+...
+--------------------
+Bob: Batch processing finished: FAILED
+Bob: Source: newcomers.csv
+Bob: Mode: create
+Bob: User: Bob
+Bob: Failed datasets: newcomers.csv.create.pending
+Bob: Processing time: ... s (... s/item)
+Bob: Processed: 4 lines (0 successful/ 4 failed)
+--------------------
 
 This time a new file was created, which keeps all the rows we could not
@@ -439,5 +455,3 @@
 >>> os.unlink('newcomers.csv')
 >>> os.unlink('newcomers.csv.create.pending')
->>> os.unlink('newcomers.csv.create.msg')
->>> os.unlink('newcomers.csv.remove.msg')
->>> os.unlink('newcomers.csv.update.msg')
+>>> os.unlink('stoneville.log')
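
In short, r4886 replaces the per-source ``<name>.<mode>.msg`` result files
with an optional, caller-supplied ``logger`` argument to ``doImport()``.
Pulled out of the doctest, the new usage pattern looks roughly like this
(a sketch only: ``processor`` stands for a batch processor instance as set
up earlier in ``batching.txt``; the logger name and file names are the ones
used in the added doctest lines):

    import logging

    # Set up a dedicated logger; propagate = False keeps the messages
    # out of the root logger, so they land only in stoneville.log.
    logger = logging.getLogger('stoneville')
    logger.setLevel(logging.DEBUG)
    logger.propagate = False
    logger.addHandler(logging.FileHandler('stoneville.log', 'w'))

    # 'processor' is assumed to be a batch processor instance from
    # batching.txt. doImport() returns a tuple of (lines processed,
    # lines failed), e.g. (4, 0) when every row succeeds.
    processor.doImport('newcomers.csv',
                       ['name', 'dinoports', 'owner', 'taxpayer'],
                       mode='create', user='Bob', logger=logger)

The doctest states the parameter is optional; judging from the updated
clean-up at the bottom of the file, no ``.msg`` files are written anymore,
so callers who want a report of the run now pass a logger themselves.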