Splunk attempts to upgrade from 7.0.0 to 7.0.0
Steps to reproduce:
- Start a splunk/splunk:7.0.0 container using host-mounted volumes for /opt/splunk/etc as described in the documentation:
  docker run -p 8000:8000 -d -e "SPLUNK_START_ARGS=--accept-license" -v /local/path/optsplunketc:/opt/splunk/etc -v /local/path/optsplunkvar:/opt/splunk/var splunk/splunk:7.0.0
- Once the container starts successfully, stop it.
- Start another container from the same splunk/splunk:7.0.0 image:
  docker run -p 8000:8000 -d -e "SPLUNK_START_ARGS=--accept-license" -v /local/path/optsplunketc:/opt/splunk/etc -v /local/path/optsplunkvar:/opt/splunk/var splunk/splunk:7.0.0
- This container will fail with exit code 1, and the logs will read:
This appears to be an upgrade of Splunk.
--------------------------------------------------------------------------------
Splunk has detected an older version of Splunk installed on this machine. To finish upgrading to the new version, Splunk's installer will automatically update and alter your current configuration files. Deprecated configuration files will be renamed with a .deprecated extension.
You can choose to preview the changes that will be made to your configuration files before proceeding with the migration and upgrade:
If you want to migrate and upgrade without previewing the changes that will be made to your existing configuration files, choose 'y'.
If you want to see what changes will be made before you proceed with the upgrade, choose 'n'.
tcgetattr: Inappropriate ioctl for device
WARNING: error changing terminal modes - password will echo!
Perform migration and upgrade without previewing configuration changes? [y/n]
If you add --answer-yes to SPLUNK_START_ARGS, the container will start, but it still goes through the "upgrade" process. I'm not sure whether this is intended behavior, but it should at least be documented: it's not clear from the documentation that starting a new container with those volumes mounted will fail. Persisting settings for a server is an important use case and one of the big benefits of running Splunk in Docker.
I am running into this same issue. Any help would be appreciated!
Hey @bretthamilton, were you able to use the workaround of adding the start args @raab70 mentioned above?
Will test this further.
It's because /opt/splunk/ftr needs to be removed after first initialization, which isn't currently being done. Since the file is baked into the image, every new container starts with /opt/splunk/ftr present again. Maybe @halr9000 or one of the other maintainers can add a patch to entrypoint.sh.
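To confirm the marker really is baked into the image, you can list it directly (a minimal sketch; --entrypoint just bypasses the normal startup script):

    # Show the first-time-run marker that ships inside the image itself
    docker run --rm --entrypoint ls splunk/splunk:7.0.0 -l /opt/splunk/ftr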
Hi, I can avoid this problem when I don't use a persistent volume. Does anyone know how to solve it, or at least how to add a patch like @foxx described?
Please solve this problem; I'm running into it as well.
Let's revisit this with the next Splunk release. Some changes were made to first-time run behavior, and I know they had to touch the Dockerfile, so this issue might go away.
(No, I can't commit to when this release will happen. Sit tight!)
It feels like part of the problem is the share folder: it probably needs the same treatment as the etc folder, since the migration appears to keep some state there. So an additional volume needs to be created, and a backup of the share folder copied into it on the first start.
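A minimal sketch of that idea in entrypoint terms (the backup path /opt/splunk-backup/share is hypothetical and assumes the image's stock share folder was copied aside at build time):

    # If the mounted share volume is empty (first start), seed it from a
    # backup of the image's original share folder. /opt/splunk-backup/share
    # is a hypothetical path, not part of the official image.
    if [ -z "$(ls -A /opt/splunk/share 2>/dev/null)" ]; then
      cp -a /opt/splunk-backup/share/. /opt/splunk/share/
    fi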
With Splunk 7.1.0 you will see integrity checks fail for installed files.
The /opt/splunk/ftr mechanism is the same in 7.1.0, so you still have this problem in 7.1.0.
Workaround
Here's a snippet from my docker-compose.yml file to fix this
Updated: @boojew pointed out I missed including my SPLUNK_START_ARGS
command: |
  bash -c "
  if [ -e /opt/splunk/etc/str ]; then
    # Sentinel exists on the persisted etc volume: not a first start, so
    # remove the baked-in first-time-run marker before launching Splunk.
    rm -f /opt/splunk/ftr
    exec /sbin/entrypoint.sh start-service
  else
    # First start: create the sentinel and let first-time run proceed.
    touch /opt/splunk/etc/str
    exec /sbin/entrypoint.sh start-service --seed-passwd changeme
  fi
  "
environment:
# bug https://github.com/splunk/docker-splunk/issues/59
- SPLUNK_START_ARGS=--accept-license --answer-yes
# Also suggested, set
# - SPLUNK_ENABLE_LISTEN=9997
# - SPLUNK_ADD=tcp 1514
# - SPLUNK_ADD_1=monitor '/var/log/*' -sourcetype linux_logs -index yourindex
# - SERVER_NAME=your.server.name
volumes:
- splunk_etc:/opt/splunk/etc
- splunk_var:/opt/splunk/var
- splunk_share:/opt/splunk/share/splunk/search_mrsparkle/modules
If you don't add the splunk_share volume, the webserver fails to come up the second time.
The -f in rm is important when the same container is restarted.
Note: I don't actually use the --answer-yes flag; I didn't need it even when upgrading from 7.1.0 to 7.1.2.
A Suggested Fix
Basically, add the workaround above to the entrypoint.
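For concreteness, a sketch of what that patch could look like near the top of entrypoint.sh (the /opt/splunk/etc/str sentinel follows the workaround above; the rest of the stock entrypoint is assumed unchanged):

    # Sketch of an entrypoint.sh patch: suppress the bogus "upgrade" on
    # restart by removing the baked-in first-time-run marker whenever the
    # persisted etc volume has already been initialized.
    if [ -e /opt/splunk/etc/str ]; then
      rm -f /opt/splunk/ftr
    else
      touch /opt/splunk/etc/str
    fi
    # ... original entrypoint logic continues below ...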
7.0.3 and still happening
> The /opt/splunk/ftr mechanism is the same in 7.1.0 [...] Basically add the workaround to the entrypoint
Using this, the startup process continues, but splunkd never starts and attaches :(
I found another workaround: change the environment variables:

- "SPLUNK_START_ARGS=--accept-license --seed-passwd --answer-yes"
Still happens on 7.1.0 and 7.1.2