
fix: add description for subcommands

marcosdotme opened this pull request 1 year ago • 2 comments

Description

Checklist

  • [x] Add test cases to all the changes you introduce
  • [x] Run ./scripts/format and ./scripts/test locally to ensure this change passes linter check and test
  • [x] Test the changes on the local machine manually
  • [ ] Update the documentation for the changes (need to remake a bunch of gifs/images @Lee-W)

Expected behavior

You should see the command description when running a subcommand with the --help option

Steps to Test This Pull Request

  1. cz commit --help
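The fix being tested comes down to how argparse exposes subcommand text: the `help` string shows up in the parent parser's listing, while the `description` shows up at the top of the subcommand's own `--help` output. A minimal sketch (the strings and setup are illustrative, not commitizen's actual parser code):

```python
import argparse

# Sketch of an argparse subcommand with a description (illustrative,
# not commitizen's real parser setup).
parser = argparse.ArgumentParser(prog="cz")
subparsers = parser.add_subparsers(dest="command")

commit = subparsers.add_parser(
    "commit",
    help="create new commit",          # shown in `cz --help`
    description="Create new commit.",  # shown in `cz commit --help`
)

# The description appears near the top of the subcommand's help text.
help_text = commit.format_help()
print(help_text)
```

Without the `description` argument, `cz commit --help` prints only the usage line and the option list, which is the gap this PR fills.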

Screenshot

(screenshot: cz commit --help output showing the subcommand description)

marcosdotme avatar May 16 '24 19:05 marcosdotme

@Lee-W I deleted tests/commands/test_other_commands.py and moved the tests into the corresponding files without asking you. Let me know if you need me to revert or squash this.

marcosdotme avatar May 16 '24 19:05 marcosdotme

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Project coverage is 97.54%. Comparing base (120d514) to head (bbf19ad). Report is 327 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #1120      +/-   ##
==========================================
+ Coverage   97.33%   97.54%   +0.20%     
==========================================
  Files          42       55      +13     
  Lines        2104     2486     +382     
==========================================
+ Hits         2048     2425     +377     
- Misses         56       61       +5     
Flag | Coverage | Δ
---|---|---
unittests | 97.54% <100.00%> (+0.20%) :arrow_up:

Flags with carried forward coverage won't be shown.

:umbrella: View full report in Codecov by Sentry.

codecov[bot] avatar May 16 '24 19:05 codecov[bot]

@Lee-W Can you help me? Don't know what's happening here, lol. Everything works fine on my local branch and all tests pass:

(screenshot: local test run with all tests passing)

But it's still failing on CI.

marcosdotme avatar May 20 '24 18:05 marcosdotme

> @Lee-W Can you help me? Don't know what's happening here, lol. Everything works fine on my local branch and all tests pass, but it's still failing on CI.

Sure, let me take a look 🙂

Lee-W avatar May 20 '24 18:05 Lee-W

I guess it's due to the latest main branch change. Could you please run poetry run pytest --force-regen and see whether it fixes the CI? If not, I'll take a deeper look.

Lee-W avatar May 20 '24 18:05 Lee-W

> I guess it's due to the latest main branch change. Could you please run poetry run pytest --force-regen and see whether it fixes the CI? If not, I'll take a deeper look.

Tried poetry run pytest --force-regen, but it fails, and I think it's because of the pytest-xdist plugin:

(screenshot: pytest --force-regen failing with pytest-xdist enabled)

So I tried putting --force-regen directly in ./scripts/test:

(screenshot: --force-regen added to ./scripts/test)

But nothing changed:

(screenshot: tests still failing)

marcosdotme avatar May 20 '24 18:05 marcosdotme

Got it. Let me take a look.

Lee-W avatar May 20 '24 19:05 Lee-W

I know what's happening. The argparse help message differs across Python versions.

Lee-W avatar May 20 '24 19:05 Lee-W

It changed in Python 3.10. We could probably generate two sets of expected results and run the test based on the Python version. We could try https://docs.pytest.org/en/latest/how-to/skipping.html#id1. Not sure whether there's something like parametrize we can use in this case. @marcosdotme, do you want me to resolve it, or do you want to take a look?

Lee-W avatar May 20 '24 19:05 Lee-W
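The version-based skip Lee-W links to could look like the sketch below (test name and body are assumptions, not commitizen's actual test suite). The root cause is that Python 3.10 changed argparse's help wording, e.g. the "optional arguments:" section header became "options:", so a single expected-output fixture can't match both interpreters:

```python
import sys

import pytest


# Sketch: one test per expected-output variant, skipped on interpreters
# where that variant doesn't apply. The fixture comparison itself is
# elided; only the skip mechanism is shown.
@pytest.mark.skipif(
    sys.version_info < (3, 10),
    reason="argparse help output changed in Python 3.10",
)
def test_commit_help_py310():
    # would compare `cz commit --help` output against the 3.10+ fixture
    ...
```

A mirror-image test with `sys.version_info >= (3, 10)` would cover the older wording; as noted below, the maintainer ultimately chose to drop the pre-3.10 variant instead.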

> It changed in Python 3.10. We could probably generate two sets of expected results and run the test based on the Python version. @marcosdotme, do you want me to resolve it, or do you want to take a look?

Can you resolve this one, so I can learn by watching this time?

marcosdotme avatar May 20 '24 19:05 marcosdotme

> Can you resolve this one, so I can learn by watching this time?

Sure 🙂

Lee-W avatar May 20 '24 19:05 Lee-W

@marcosdotme Done. I decided not to cover < 3.10, as it's not important enough to check.

Lee-W avatar May 20 '24 19:05 Lee-W

> @marcosdotme Done. I decided not to cover < 3.10, as it's not important enough to check.

Ok! Thanks 🤝🏻

marcosdotme avatar May 20 '24 20:05 marcosdotme

@woile @noirbizarre I'm planning on merging this in the next few days. Please let me know if you'd like to take a look. 🙂

Lee-W avatar May 20 '24 21:05 Lee-W