deploy error: accepts at most 1 arg(s), received 3 - docker login ?
Hi Jonathan:
When deploying with 'socketcluster deploy' I get an error after the image is built. It apparently fails at 'docker login':
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:145631044140023b405ff54e34796fb85a2a12f35ccad22 0.0s
=> => naming to docker.io/ciliconherve/signal-skytalk:v1.0.0 0.0s
accepts at most 1 arg(s), received 3
[Error] Failed to deploy the 'signal-skytalk' app. Command failed: docker login -u "...myid" -p "...mypassword"; docker push
ciliconherve/signal-skytalk:v1.0.0
If I run the docker login command by itself, it succeeds (with a warning):
docker login -u "...myid" -p "...mypassword"
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Login Succeeded
PROBLEM:
I never had this issue before (we have used socketcluster for the last year - works great by the way!) and I searched online but cannot find where this comes from.
Also I am able to run these 2 commands manually:
docker login -u "...myid" -p "...mypassword"
docker push ciliconherve/signal-skytalk:v1.0.0
ALTERNATIVE FIX:
Would you have the list of commands that 'socketcluster deploy' runs, so that I can perform the initial deploy manually? Hopefully 'socketcluster update' will then work.
@hervejegou Strange. I have not made any changes to the deployment process in a long time. Maybe this issue is caused by an update to kubectl or docker.
If doing it manually, there are two ways to deploy:
1. Deploy the source code in an app-src-container, then let Kubernetes attach it to the scc-worker container
This is how the socketcluster deploy command works. It takes up the least space and bandwidth since it only pushes the source code and its npm dependencies in their own container, instead of the entire Node.js + SC environment, to DockerHub. But it's trickier to perform manually since the container names have to match up so that Kubernetes can automatically attach the source code container to the scc-worker container (the source container is an init container).
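For anyone unfamiliar with the pattern, it looks roughly like this in Kubernetes terms. This is a generic sketch, not SCC's actual manifest; the image names and paths are illustrative assumptions:

```yaml
# Generic illustration of the init-container pattern: an init container
# copies the app source into a shared volume, which the scc-worker
# container then mounts as its app directory. All names are hypothetical.
spec:
  template:
    spec:
      volumes:
      - name: app-src
        emptyDir: {}
      initContainers:
      - name: app-src-container
        image: ciliconherve/signal-skytalk:v1.0.0
        command: ["cp", "-a", "/usr/src/.", "/app-src-dir/"]
        volumeMounts:
        - name: app-src
          mountPath: /app-src-dir
      containers:
      - name: scc-worker
        image: scc-worker   # placeholder; use the actual SCC worker image
        volumeMounts:
        - name: app-src
          mountPath: /usr/src/app
```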
You should read the code here: https://github.com/SocketCluster/socketcluster/blob/7b1d97cbcd9d37ae51eb9305bbaedfac774622b8/bin/cli.js#L496-L752 to see what steps are executed for the deploy. It's not very complex but the logic here looks messy because it was written a long time ago before async/await (you kind of have to read the code backwards to follow the callbacks ;p).
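For reference, the sequence in that file appears to boil down to something like the following. This is a sketch based on reading the linked code, not an authoritative list; the option names are mine, and the kubectl step in particular is my assumption:

```javascript
// Hypothetical summary of what `socketcluster deploy` runs, pieced together
// from reading the linked cli.js; names and exact order are illustrative.
function deployCommands(opts) {
  return [
    `docker build -t ${opts.imageName} .`,           // build the app image
    `docker login -u ${opts.user} --password-stdin`, // authenticate with the registry
    `docker push ${opts.imageName}`,                 // publish the image
    `kubectl apply -f ${opts.deploymentFile}`        // roll it out (assumed step)
  ];
}
```

Running these one at a time by hand is a reasonable way to reproduce the deploy while debugging.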
2. Build a custom scc-worker from scratch with your source code built in
It will still work the same way architecturally, but in this case it takes more space since the entire Node.js environment is inside the container (not just your source code), and the image has to be entirely rebuilt and pushed even for a one-line change. That said, this should be a lot easier to do manually.
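A minimal Dockerfile for this approach could look something like the sketch below. The base image, paths, and entry point are assumptions; check your app's actual start script:

```dockerfile
# Hypothetical custom worker image with the app source baked in.
FROM node:18-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --production
COPY . .
CMD ["node", "server.js"]   # replace with your app's real entry point
```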
In this case, you can simply customize scc-worker-deployment.yaml to point to your container image and make sure that you use the same environment variables so that your containers will be exposed correctly to the cluster.
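For example, the relevant part of scc-worker-deployment.yaml would be edited roughly like this. This is only an excerpt sketch; copy everything else, especially the env entries, from the original file:

```yaml
# Hypothetical excerpt; keep the original file's labels and env vars intact.
spec:
  template:
    spec:
      containers:
      - name: scc-worker
        image: ciliconherve/signal-skytalk:v1.0.0   # point to your custom image
        # env: copy the environment variables from the original
        # scc-worker-deployment.yaml so the container is exposed correctly
```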
I have the same issue. I am running on Windows. If I paste the command into PowerShell, it runs. It seems like something may have changed in how the execSync function handles the chained commands.
This fix to cli.js seems to work: basically splitting up the login and the push into separate execSync calls.
execSync(`docker build -t ${dockerConfig.imageName} .`, {stdio: 'inherit'});
// Run the login on its own instead of chaining it with the push,
// which avoids the "accepts at most 1 arg(s)" parsing error.
execSync(dockerLoginCommand, {stdio: 'inherit'});
execSync(`docker push ${dockerConfig.imageName}`, {stdio: 'inherit'});
@jondubois We have moved to a deployment using GitHub Actions. Every PR triggers a test image build; if it passes, the PR can be merged, and on merge (or on any direct commit) a GitHub Action deploys the update to our K8s cluster. If that is of interest, I can ask our developer to write a quick guide on that. So we are all good for now. Thanks.
@hervejegou Glad you found a solution. A guide would be great. If you share the link with me, I can add it to the socketcluster.io website.
We have a pending PR which should address the issue with the CLI. I just haven't had the time to test it yet.