George Adams
I'm +1 to this. We should definitely document the test failures as we skip modules
@MylesBorins how have you currently been doing this? Would it not be easier to have a tidy report at the end of the output?
so you're more for making this a permanent fixture?
No, the way I view it is that at the end of the citgm run there will be an output summarising any modules that are marked as flaky but are...
Yes, I think we are actually. Is there any harm in implementing this? It means that we catch fixed modules more quickly
@MylesBorins we can always use environment variables in the `lookup.json`?
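For illustration, an entry in `lookup.json` could carry module-specific configuration along these lines. This is only a sketch: the module name and the `envVar` field are assumptions about the lookup schema, not something confirmed in this thread.

```json
{
  "lodash": {
    "envVar": "CITGM_RUN_LODASH"
  }
}
```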
@MylesBorins how are we going to add this if we can't pass in custom test commands?
@TheLarkInn the way that citgm currently works is that it runs `npm install` and then `npm test` on the module.
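The per-module flow described above boils down to two npm commands. A minimal shell sketch, simplified for illustration (the real tool also fetches the package tarball, applies `lookup.json` overrides, and handles timeouts; `run_module` is a hypothetical name, and the `echo`s stand in for the actual npm invocations):

```shell
# Simplified sketch of citgm's per-module flow (illustrative only).
run_module() {
  local module="$1"
  echo "citgm: fetching $module"
  # In a real run these commands execute inside the unpacked module directory:
  echo "citgm: running 'npm install' for $module"
  echo "citgm: running 'npm test' for $module"
}

run_module lodash
```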
Not at the moment, we are trying to move away from all custom scripts
@TheLarkInn can you not just change your test script in package.json from ` "test": "mocha test/*.test.js --harmony --check-leaks",` ==> `"test": "npm link && npm link webpack && mocha test/*.test.js --harmony...
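Spelled out, the suggested `package.json` change would look roughly like this (a sketch: the comment above is truncated, so the trailing flags are assumed to match the original `--harmony --check-leaks` script):

```json
{
  "scripts": {
    "test": "npm link && npm link webpack && mocha test/*.test.js --harmony --check-leaks"
  }
}
```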