Put on napari hub
https://www.napari-hub.org/
Discussed previously here: https://forum.image.sc/t/publish-napari-plugin-with-conda-dependencies/65653
Summarizing:
- We need to release micro-sam on PyPI. Then napari hub will automatically scrape PyPI and will find micro-sam, because we already have the `Framework :: napari` classifier.
- Because micro-sam requires some dependencies only available on conda (not PyPI), we will:
  - Not worry about the fact that it will not install everything correctly if users try `pip install micro-sam` (this is not great, but there is no good alternative if we want micro-sam discoverable on the napari hub).
  - Add a line to the documentation (maybe around here) telling users please do not use pip to install micro-sam. Hopefully anyone with installation problems will see that page and then try again with conda or from source.
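For reference, the classifier mentioned above is a standard PyPI trove classifier. In a `pyproject.toml` it would look roughly like this (a sketch; everything except the classifier string itself is a placeholder, not taken from the actual micro-sam metadata):

```toml
# Sketch of the relevant project metadata.
# Name/version are placeholders; only the classifier is the point here.
[project]
name = "micro_sam"
version = "0.0.0"
classifiers = [
    # This trove classifier is what the napari hub looks for when scraping PyPI:
    "Framework :: napari",
]
```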
Thanks for looking into this @GenevieveBuckley. Let's go ahead and put a "dummy" micro_sam package on pip then so that it can be installed via napari.
Would you have time to look into this? (There's no hurry!)
I've made a PR here: https://github.com/computational-cell-analytics/micro-sam/pull/606
I have made a new release now, but the `deploy` part of the action that would push the package to PyPI was skipped, see https://github.com/computational-cell-analytics/micro-sam/actions/runs/9222705341.
I can't see why in the log; any idea why this may happen @GenevieveBuckley?
There are two conditions for the deploy job:
- The previous `update_release_draft` job in the `release_drafter` workflow completed successfully, and
- there is a tag: `if: contains(github.ref, 'tags')`
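Expressed as workflow YAML, the two conditions above presumably look something like this (a sketch; apart from `update_release_draft` and the quoted `if:` condition, the job layout and step contents are assumptions):

```yaml
# Sketch of the deploy job as described above.
deploy:
  # Condition 1: the release drafter job must have completed successfully.
  needs: update_release_draft
  # Condition 2: the ref that triggered this workflow run contains "tags".
  if: contains(github.ref, 'tags')
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - name: Build and publish
      run: echo "build and upload steps would go here"  # placeholder
```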
I was assuming that the `deploy` job would see that the first job created a new tag, but perhaps this is not how things work. Maybe because that tag is being created from inside the `update_release_draft` job?
I guess the main question is: how does the current release drafter work? Should the `deploy` job be turned into a step that runs after the others (possibly conditional on the results from a previous step)? Is there ever a case where a release could be drafted, but we don't want anything uploaded to PyPI?
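One relevant detail: `github.ref` is fixed to the ref that triggered the workflow run, so a tag created by a job *inside* that run does not make `contains(github.ref, 'tags')` true for a sibling job in the same run. A common pattern (a sketch, not necessarily the right fix for this repository) is to put the PyPI upload in a separate workflow that triggers on tag pushes, so `github.ref` is guaranteed to be the tag:

```yaml
# Sketch: a separate workflow that only runs when a tag is pushed.
# All names and the tag pattern are illustrative assumptions.
name: deploy

on:
  push:
    tags:
      - "v*"  # assumes release tags start with "v"

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Build distributions
        run: |
          python -m pip install build
          python -m build
      - name: Publish to PyPI
        uses: pypa/gh-action-pypi-publish@release/v1
```

With this split, the question of whether the drafter job created a tag no longer matters for the deploy side: the upload runs if and only if a matching tag is pushed.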