Able to log in, but POST updates to syncing-server are returning 500
Hello,
I am running Standard Notes behind an nginx reverse proxy (nginx -> Standard Notes server).
I am able to authenticate and pull notes from my Standard Notes server into the client, but about 80% of the time it fails to push updates to those notes back to the server. Sometimes it works, sometimes it doesn't. The result is merge conflicts, with notes continually duplicating in my client.
The nginx reverse proxy access logs show 200s and then occasional 500s, which correspond with the sync failures. The error logs show nothing at all.
Any ideas?
Here's the access log sample:
xxx.xxx.xxx.xxx - - [23/Jul/2020:18:21:51 +0000] "POST /items/sync HTTP/1.1" 200 3689 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) StandardNotes/3.4.1 Chrome/80.0.3987.158 Electron/8.2.0 Safari/537.36"
xxx.xxx.xxx.xxx - - [23/Jul/2020:18:22:09 +0000] "POST /items/sync HTTP/1.1" 500 654 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) StandardNotes/3.4.1 Chrome/80.0.3987.158 Electron/8.2.0 Safari/537.36"
Reverse proxy nginx config:
server {
    listen 443 ssl;
    listen [::]:443;
    server_name xxx;

    ssl on;
    ssl_certificate /etc/letsencrypt/live/xxx/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/xxx/privkey.pem;

    location / {
        proxy_pass http://10.0.0.100;
    }
}

server {
    listen 80;
    listen [::]:80;
    server_name xxx;
    return 302 https://$server_name$request_uri;
}
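(As an aside, proxied Rails apps commonly also receive the original host and client address via headers; a minimal, optional sketch for the location block above would be:)

    location / {
        proxy_pass http://10.0.0.100;
        # Forward the original host and client IP to the upstream app.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }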
Standard Notes server systemd service file:
[Unit]
Description=Standard Notes
Requires=network.target

[Service]
Type=simple
WorkingDirectory=/home/xxx/standardnotes/syncing-server
ExecStart=/usr/local/bin/bundle exec rails server -e production -b 0.0.0.0 -p 80
TimeoutSec=30
RestartSec=15s
Restart=always

[Install]
WantedBy=multi-user.target
The Standard Notes syncing-server logs show this:
2020-07-23 18:23:12.594 [WARN ] Can't verify CSRF token authenticity. (pid:4904)
2020-07-23 18:23:21.108 [INFO ] Completed 200 OK in 9050ms (Views: 2.1ms | ActiveRecord: 10.7ms) (pid:4904)
2020-07-23 18:23:30.286 [INFO ] Started POST "/items/sync" for **FILTERED** at 2020-07-23 18:23:30 +0000 (pid:4904)
2020-07-23 18:23:30.288 [INFO ] Processing by Api::ItemsController#sync as */* (pid:4904)
2020-07-23 18:23:30.289 [INFO ] Parameters: {REDACTED} (pid:4904)
Could you please post the complete log from the syncing-server?
I added logs just now - do you mean more than the above?
Yes. Ideally, the logs should include the 500 error messages.
2020-07-23 18:26:48.398 [INFO ] Completed 500 Internal Server Error in 9039ms (ActiveRecord: 7.6ms) (pid:4904)
2020-07-23 18:26:48.400 [FATAL] (pid:4904)
2020-07-23 18:26:48.401 [FATAL] Aws::Errors::MissingRegionError (missing region; use :region option or export region name to ENV['AWS_REGION']): (pid:4904)
2020-07-23 18:26:48.401 [FATAL] (pid:4904)
2020-07-23 18:26:48.401 [FATAL] app/models/item.rb:74:in `cleanup_excessive_revisions'
Not sure why it is saying AWS? This is locally hosted. I am using the production flag though, if that helps.
EDIT: Going back to the development flag fixes the issue. Not sure why.
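(For reference, the error message itself points at a workaround: give the AWS SDK a region, e.g. by exporting AWS_REGION. In the systemd unit above, that would be one extra line under [Service], using a placeholder region:)

# Placeholder region; this satisfies the SDK's region check,
# though SQS calls would still need a real queue behind them.
Environment=AWS_REGION=us-east-1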
2020-07-23 18:26:48.401 [FATAL] app/models/item.rb:74:in `cleanup_excessive_revisions'
This method should not exist on the master branch, so you might be on the develop branch?
Yep, that's the issue. Any reason why develop is your default branch?
@karolsojko cleanup_excessive_revisions should check whether AWS is enabled. Generally, we should make any AWS behavior optional so self-hosting users don't run into issues like this.
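(For illustration, the guard being suggested could look something like this in app/models/item.rb; the job name and the env-based check are assumptions, not the actual code at line 74:)

def cleanup_excessive_revisions
  # Hypothetical guard: skip the queue-backed cleanup when AWS isn't configured,
  # so self-hosted instances without SQS don't raise Aws::Errors::MissingRegionError.
  return if ENV['AWS_REGION'].blank?
  CleanupRevisionsJob.perform_later(uuid) # illustrative job and argument names
end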
The application's code layer should be agnostic of the infrastructure, and the cleanup of excessive revisions is. I'm not sure what you had in mind with checking whether it's enabled - aren't the other *.perform_later calls crucial to the app logic?
I think we need to update the docs to mention that the app uses shoryuken, which relies on AWS SQS in production mode. Or provide an alternative that runs jobs inline by default? What do you think?
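(For illustration, the inline fallback could be a one-line ActiveJob setting; the conditional is a sketch, not the project's actual configuration:)

# config/environments/production.rb (illustrative location)
# Use the shoryuken/SQS adapter only when AWS is configured; otherwise run jobs inline.
config.active_job.queue_adapter = ENV['AWS_REGION'].present? ? :shoryuken : :inline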
Yep, just some solution to the AWS dependency. Users shouldn't need AWS to self-host. Inline jobs could be a good solution.