
merge common toolkit tests

adalke opened this issue 4 years ago · 2 comments

The openff/toolkit/tests/test_toolkits.py has a number of tests which should be equivalent across all toolkits (except for some I/O values, like the specific details of SMILES canonicalization).

This test code is repeated for the different toolkits, which means they may diverge over time. It also means that a test might be added to one toolkit but not added to the other.

I propose having a base class with the shared tests, perhaps something like:

class SharedToolkitTests:
    # wrapper is set to a ToolkitWrapper subclass in each derived class.
    wrapper = None

    def _test_smiles(self, smiles):
        # Ensure round-tripping of canonical SMILES is canonical.
        toolkit_wrapper = self.wrapper()
        molecule = Molecule.from_smiles(smiles, toolkit_registry=toolkit_wrapper)
        # When making a molecule from SMILES, partial charges should be initialized to None.
        assert molecule.partial_charges is None
        smiles2 = molecule.to_smiles(toolkit_registry=toolkit_wrapper)
        assert smiles == smiles2

where the toolkit tests would be implemented something like:

class OpenEyeSharedTests(SharedToolkitTests):
    wrapper = OpenEyeToolkitWrapper

    def test_smiles(self):
        self._test_smiles("[H]C([H])([H])C([H])([H])[H]")

class RDKitSharedTests(SharedToolkitTests):
    wrapper = RDKitToolkitWrapper

    def test_smiles(self):
        self._test_smiles("[H][C]([H])([H])[C]([H])([H])[H]")

There could also be a test stub in the base class to ensure that a derived class must either implement or explicitly silence a given test, along with some way to tell pytest that the stub itself should not be collected.
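The stub idea can be sketched without any toolkit dependencies. Note that pytest's default collection only picks up classes whose names match `Test*`, so a shared base class named `SharedToolkitTests` would not be collected on its own; setting `__test__ = False` makes that intent explicit. All class and method names below are illustrative, not part of the toolkit:

```python
class SharedToolkitTests:
    # pytest skips any class with __test__ = False, so the shared base
    # is never collected directly (its name also lacks the Test* prefix).
    __test__ = False

    def test_smiles(self):
        # Stub: a derived class must override this (or explicitly skip it).
        raise NotImplementedError(
            f"{type(self).__name__} must implement test_smiles"
        )


class TestExampleToolkit(SharedToolkitTests):
    # __test__ is inherited, so concrete subclasses must re-enable collection.
    __test__ = True

    def test_smiles(self):
        # A real implementation would round-trip a SMILES string here.
        assert True
```

A subclass that forgets to override the stub then fails loudly with `NotImplementedError` rather than silently passing.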

adalke avatar May 27 '21 15:05 adalke

> This test code is repeated for the different toolkits, which means they may diverge over time. It also means that a test might be added to one toolkit but not added to the other.

Completely agree. This is a big risk that we take with the current design.

I really like this idea, though I'm uncertain how it will interact with our use of the pytest.mark.parametrize and @requires_openeye decorators. Could you put together a proof-of-concept PR where you convert a few (~3?) existing OE+RDK tests to use this pattern?
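For what it's worth, marks generally compose cleanly with this pattern, because the decorators are applied to the overriding method in each subclass rather than to the shared helper. A minimal sketch, using a generic `skipif` as a stand-in for `@requires_openeye` and a dummy assertion in place of the real wrapper round-trip (all names here are illustrative):

```python
import pytest

HAS_OPENEYE = False  # stand-in for the real license/availability check


class SharedToolkitTests:
    def _test_smiles(self, smiles):
        # Shared body; the real version would round-trip through the wrapper.
        assert isinstance(smiles, str) and smiles


class TestOpenEyeShared(SharedToolkitTests):
    # Marks attach to the subclass's overriding method, so each toolkit
    # class can carry its own skip condition and parameter sets.
    @pytest.mark.skipif(not HAS_OPENEYE, reason="OpenEye unavailable")
    @pytest.mark.parametrize("smiles", ["[H]C([H])([H])C([H])([H])[H]"])
    def test_smiles(self, smiles):
        self._test_smiles(smiles)
```

Since pytest marks don't wrap the function, the decorated method remains directly callable, and the shared `_test_smiles` body stays mark-free and reusable.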

j-wags avatar May 28 '21 02:05 j-wags

It's easier, and I think more illustrative, to create a simple test file with purpose-built functions than to extract existing OE+RDK tests and make a PR.

Take a look at this gist - https://gist.github.com/adalke/991440fe6b431d1171b60de06bccd5d0 .

I also tested it with an invalid OE_LICENSE to check the license-check decorators.

adalke avatar May 28 '21 09:05 adalke