Feature Request: Docker installation
I would love to give self-hosted EteSync a try, but I'd like to run it on Docker. I'm willing to try this on my own, but I will probably need some time to get it right... I will share my results once I have a working solution!
Hello @renatobellotti, maybe you want to check the image mentioned at https://github.com/etesync/server/issues/12#issuecomment-625870434. I don't use Docker myself though.
Please try my image at https://hub.docker.com/r/victorrds/etesync, tell me if it works for you, and post any issues on my repo.
Awesome! Any idea when we can see 2.0 builds?
@victor-rds do you intend to update your images to 2.0?
@worldofgeese Yes, I just haven't had the time to do it yet.
The new EteSync 2.0 (Etebase) has breaking database changes. @tasn and I talked a bit about the best course of action: just updating my image's :latest tag could break things for almost everyone who uses it, so I have two options: go ahead and break it, or create a new separate image and add a deprecation notice to the old one.
As I mentioned, I really dislike creating a new image, as it would lose all of the "reputation" (stars/downloads), which means people won't know it's the preferred choice.
How about adding a legacy tag and then just updating latest? This way, if people update without looking, they'll have an easy way to revert back to a working version, while not losing all of the reputation this repo has already earned.
It would basically just be a retag, since the repository already has versioned tags like v0.3.0.
Ah right, there are already tags! So I see no problem with just having latest point to 2.0...
@tasn does the etesync/etesync-web project contain the migration tool, or is it just the client for 2.0?
It only contains the old etesync-web (the legacy branch), not the client for 2.0. For migration, use https://client.etesync.com/migrate-v2/, or you can migrate directly from the Android app.
@tasn Is there an easy way to test if the database is from EteSync 1.0?
I'm changing the entrypoint script for Etebase, and I want to avoid damaging the original database.
To support both databases (SQLite and PostgreSQL) I'm using python manage.py inspectdb journal_journal to check whether the database is from EteSync and not Etebase.
Your check sounds as good as any to me. I don't have a better idea.
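For anyone following along, here is a minimal sketch of what such an entrypoint check could look like, assuming the image ships the Django manage.py. Only the inspectdb command and the journal_journal table name come from this thread; everything else is illustrative:

    #!/bin/sh
    # Sketch of an entrypoint fragment: detect a legacy EteSync 1.0 database by
    # checking whether the old journal_journal table can be introspected.
    # Going through manage.py means the same check works for SQLite and PostgreSQL.
    if python manage.py inspectdb journal_journal 2>/dev/null | grep -q "class JournalJournal"; then
        echo "Found an EteSync 1.0 (legacy) database; not running Etebase against it." >&2
        exit 1
    fi
    exec "$@"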
I don't know if this is worth opening a new issue, but the new Etebase uses an ini config file whose location is defined in etebase_server/settings.py. Could we set the path to etebase-server.ini via an environment variable?
Here is the rationale: Docker Swarm offers a secrets feature that mounts sensitive files at /run/secrets/<secret_name>, and the ini file can contain sensitive data, like database passwords. We already use secret.txt this way. Other orchestration tools like Kubernetes have their own secret stores too, and using an env var solves it for all of them.
Sure thing, got an environment variable name in mind? Happy to do it.
What I do locally is slightly different: I just symlink the path to the secrets path in my Dockerfile. So I'd have ln -s /run/secrets/etebase_ini /app/etebase-server.ini. Though I guess it's much easier to not hardcode it and just use an env var. :P
It's not an option for my case (because I do it for the etebase_local_settings.py file which needs to have a specific location anyway).
Sure, this works, but I'm trying to keep it as generic as possible for as many use cases as possible.
Another advantage, for me at least, is migrating all settings and db files to a single directory, /data, while at the same time avoiding touching the base dir with a symlink.
Maybe ETEBASE_INI_PATH to maintain consistency with ETEBASE_DB_PATH
config_locations = [os.environ.get('ETEBASE_INI_PATH', os.path.join(BASE_DIR, 'etebase-server.ini')), '/etc/etebase-server/etebase-server.ini']
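To illustrate how that would tie in with the Swarm secrets rationale above, here is a rough usage sketch. The env var name, the /run/secrets/etebase_ini path, and the victorrds/etesync image come from this thread; the service name is made up, and port/volume flags are omitted:

    # Store the ini file as a Swarm secret and point the server at its mount path.
    docker secret create etebase_ini ./etebase-server.ini
    docker service create \
      --name etebase \
      --secret etebase_ini \
      -e ETEBASE_INI_PATH=/run/secrets/etebase_ini \
      victorrds/etesync:latest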
I can make a pull request if you prefer
I replied to the PR! Looks great overall, just made two small comments, and after that I can merge it in. Thanks!
Btw, could you please tag the latest legacy release with the legacy tag (maybe also add a mention in the README), just so people know what's up.
Already done ten days ago, I just didn't update the README: victorrds/etesync:legacy
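For anyone who wants to stay on 1.x for now, pinning that tag explicitly instead of following :latest is enough, for example:

    # Stay on the EteSync 1.x server instead of following :latest to Etebase 2.0.
    docker pull victorrds/etesync:legacy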
Ahh, great job! :)
Probably a bit too late to chime in on this, but wouldn't it be far simpler to create a completely new docker image for EteBase? That way you don't need to worry about complex configurations, migrations and clashes; you could focus on making them clean and separate.
Hell, all you'd need to do is clone the existing docker image to a new repo, rename everything to EteBase, add any needed changes to the docker image, point the latest tag at the latest EteBase version, and publish to Docker Hub.
Then the end user could download the separate docker image, spin up the new EteBase container alongside the legacy EteSync one, fire up one of the migration tools against the two separate servers, migrate their data, and then shut down the old EteSync container. Easier for you guys and the end users: you can keep things clean and simple without worrying about compatibility or checking which version of the database is there, and end users can keep using the legacy images as long as they need and nothing will break. Needing to spin up a separate container for EteBase also means they have to take deliberate steps to migrate to the new architecture, which conveys the backwards-incompatible nature of this new release.
@BeatLink, we already considered that, though I think the current approach is more favourable for two reasons:
- There are already 100k+ downloads and a few stars on Docker Hub. This implies it is the correct and trusted image to use, and lends a lot of credibility to both the Docker image in particular and the project in general. Losing this would be unfortunate.
- Software sometimes has incompatible versions (when major versions change), and this one is no different. It's the same way you have Python 2 and Python 3 Docker images, and Postgres 12 and 13.
Hmm, I understand. In any case, I'm working on a separate image right now. I've invited you to the GitHub organization and the Docker Hub organization as well.
How does one migrate databases between 1.x and 2.0, given that they simply flip the tag to latest or are already running latest?
https://github.com/etesync/server/blob/master/README.md#updating-from-version-050-or-before
You would have to use a migration tool. Since you can't use the same database, it's likely you would have to spin up a new docker container as well. This is why I think it's best to start with separate docker images, because it's entirely incompatible. Combining the two into a single docker image with only tags to separate them is just begging for builds to break, especially when several self-hosters have scripts to automatically update their images to the latest build, which wouldn't take the breaking changes into account.
Even the above-mentioned link states you have to use separate install paths, since the databases are incompatible.
@BeatLink, as I said though, it's exactly like Postgres. You need to stop the server, migrate to a new one, and then run the new image.
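A rough sketch of that flow with this image, for readers landing here later. It assumes the tags discussed above; the container names are made up, and port/volume flags are omitted on purpose (check the image README for those):

    # 1. Keep the old 1.x server running under the pinned legacy tag.
    docker run -d --name etesync-legacy victorrds/etesync:legacy
    # 2. Start a fresh Etebase 2.0 server alongside it, with its own empty database.
    docker run -d --name etebase victorrds/etesync:latest
    # 3. Migrate your data between the two servers, e.g. via the web migration tool
    #    at https://client.etesync.com/migrate-v2/ or directly from the Android app.
    # 4. Once everything is migrated, retire the old container.
    docker stop etesync-legacy && docker rm etesync-legacy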
Okay, I see. So I'll just go ahead and build an omnibus docker image for my personal use
What do you mean by omnibus?
Anyhow, it's the last time for the foreseeable future that this kind of breakage will happen. So after this is done, we are clear. :)
I'm creating a personal image that rolls the server, web client and DAV adapter all into one, and I'll use Nginx as the frontend server and proxy between the three. So accessing my server on the EteSync port will show you Nginx with the main page being the web client, the /admin page will take you to the server admin, and /dav will take you to the DAV adapter.