Not a tty
Default shells seem to be limited. For example, the following step:
- run: |
    ps
    echo "$(tty)"
produces:
PID TTY TIME CMD
2587 ? 00:00:02 Runner.Listener
2611 ? 00:00:02 Runner.Worker
2763 ? 00:00:00 bash
2777 ? 00:00:00 ps
not a tty
As a result, multiple tools that can provide pretty coloured logs do not work as expected. This is the case for, e.g., colorama or pytest in the Python ecosystem.
A possible workaround is to use docker run --rm -t .... However, this involves installing in the container multiple resources/tools that are already available on the host. Furthermore, I don't know whether Windows containers are supported on windows-latest jobs.
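For reference, a minimal sketch of that docker-based workaround as a workflow step; the ubuntu:19.04 image and the ./run_tests.sh command are placeholders, not part of the original setup:

- run: docker run --rm -t -v "$PWD:/src" -w /src ubuntu:19.04 ./run_tests.sh

The -t flag allocates a pseudo-TTY inside the container, so isatty() checks inside the command succeed even though the step's own stdout is a pipe.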
I tried setting shell: bash -i -l {0}, but I get:
bash: cannot set terminal process group (1291): Inappropriate ioctl for device
bash: no job control in this shell
I tried python -c 'import pty; pty.spawn("/bin/sh")' too, but the job will run for more than 10 min with no output.
What is the appropriate syntax to get a TTY?
Thank you @1138-4EB, but we'd like to keep issues related to code in this repository.
- If you have questions about writing workflows or action files, then please visit the GitHub Community Forum's Actions Board
- If you are having an issue or question about GitHub Actions then please contact customer support
@andymckay, this issue is related to almost, if not all, the code in this repository. For example:
- https://github.com/actions/starter-workflows/blob/master/ci/python-app.yml
- https://github.com/actions/starter-workflows/blob/master/ci/python-package.yml
- https://github.com/actions/starter-workflows/blob/master/ci/python-publish.yml
I too am interested in this.
So, I tried using the shell: option, as in:
But got the following:
@ioquatix, note that the messages in lines 4 and 5 are warnings. I don't know if those are related to the error in line 6. See for example https://github.com/ghdl/ghdl/runs/298621638#step:5:14
Ahh, interesting.
So maybe it can work? Why does it get permission denied when trying to run that script? Presumably it's {0} being substituted, but it's not executable??
The execution might continue after those warnings, although without providing a tty. Anyway, I have not been able to get it working using shell and run together. See actions/starter-workflows#95.
Yeah, I've come to the same conclusion, no matter what I do I can't figure out how to get it to allocate a TTY/PTY that actually works.
@1138-4EB did you find any solution at all?
No. TBH, I was quite surprised by how this issue was handled. The alternatives so far are to manually force each tool (if options are provided) or to use docker containers (thus making GitHub environments and actions irrelevant).
I've emailed github support.. but that's a black hole... let's see if anything comes of it.
What are you trying to do with this? Is it just color codes? (I'm sure there are other implications of being a tty.) As far as I'm aware, the web console will render ANSI color codes correctly; however, there is of course the issue of: how is a tool, in general, supposed to know that? I don't really know enough to say for sure, but it seems like allocating a tty is not 100% correct: does that assume it is two-way input/output? Obviously you can't type into the web console, so I don't know how close we can get to saying the web console is a true tty, and thus implying we support the features of a tty (input, maybe more?). I hesitate to say we should force downstream tools (anything the runner runs during job execution, i.e. your workflow) to think they have a tty. I recall this impacting things like installers, which may throw up a prompt and wait for input, and similar issues.
I do know: we set $TERM=dumb so that tools run here don't try to do anything fancy. After all, all we are doing is buffering the stdout of the programs you run and streaming it line by line to the web browser. That being said, I did mention the web console will render ANSI colors. How can we configure the environment to indicate that to downstream processes? I've played around with setting TERM differently, read about terminfo, etc., but honestly haven't fully wrapped my head around it yet.
I guess in general my stance is that the web console is not a terminal and shouldn't pretend to be, although we do in a sense, because we render ANSI colors. And my question to that is: is there a granular way we can surface that capability (ANSI colors only) to downstream tools? Via environment or otherwise?
@ioquatix To answer something you brought up, in the shell: option, {0} is the filename (not contents) of the temp file that the run: string was written to, per https://help.github.com/en/actions/automating-your-workflow-with-github-actions/workflow-syntax-for-github-actions#custom-shell
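As a contrived illustration of that substitution (assuming a Linux runner with cat available), the following step prints the generated script file instead of executing it, which shows that {0} is a path to the temp file rather than the script's contents:

- shell: cat {0}
  run: |
    echo hello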
@dakale An example use case for a TTY is gpg-agent, which is used to sign files on GitHub Actions. If the TTY is broken, GPG2, installed by default, can't communicate with gpg-agent properly.
To resolve this, we needed to force using GPG1, which doesn't depend on a TTY by default. @olafurpg created a GitHub action to do so: https://github.com/olafurpg/setup-gpg
Signing files with GPG is required for uploading artifacts to the Maven Central repository. Hopefully, we can use gpg2 (installed by default) in GitHub Actions without such a complex setup.
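For completeness, a hedged sketch of signing without a TTY-backed pinentry by using GnuPG's loopback mode; the secret names and the artifact.jar filename are assumptions for illustration only, and older GnuPG 2.1 releases may additionally require allow-loopback-pinentry in the agent configuration:

- run: |
    # import the private key from a secret, then sign without prompting on a TTY
    echo "$SIGNING_KEY" | gpg --batch --import
    gpg --batch --yes --pinentry-mode loopback \
        --passphrase "$PASSPHRASE" --armor --detach-sign artifact.jar
  env:
    SIGNING_KEY: ${{ secrets.SIGNING_KEY }}
    PASSPHRASE: ${{ secrets.PASSPHRASE }}

This avoids gpg-agent prompting on a TTY, at the cost of passing the passphrase through the environment.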
Hi @dakale! Thanks a lot for giving this a thought!
What are you trying to do with this? Is it just color codes? (I'm sure there are other implications of being a tty.) As far as I'm aware, the web console will render ANSI color codes correctly; however, there is of course the issue of: how is a tool, in general, supposed to know that?
Yes, the main issue I found is a difference between executing "test runners" locally or on GHA. Some of these tools are pytest, tox, colorama, yarn, grunt, (rich)go... The point is that each of them uses a different mechanism to force colours when a tty is not available. Therefore, although possible, it is a maintenance nightmare. I'm afraid that there is no single mechanism to tell all of them at once, other than providing a good enough tty.
I don't really know enough to say for sure, but it seems like allocating a tty is not 100% correct: does that assume it is two-way input/output? Obviously you can't type into the web console, so I don't know how close we can get to saying the web console is a true tty, and thus implying we support the features of a tty (input, maybe more?). I hesitate to say we should force downstream tools (anything the runner runs during job execution, i.e. your workflow) to think they have a tty.
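To make that maintenance burden concrete, here is a sketch of the per-tool approach being criticised; each variable or flag is that tool's own convention (and may change over time), not something GitHub Actions defines:

env:
  PY_COLORS: "1"       # convention read by pytest and tox
  FORCE_COLOR: "1"     # convention read by chalk-based Node tools (yarn, jest, ...)
steps:
  - run: pytest --color=yes   # per-invocation flag, yet another mechanism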
IMHO, you are mixing concepts here. On the one hand, TTY stands for TeleTYpewriter, which nowadays is interpreted as a text-only terminal, as opposed to a GUI. On the other hand, a terminal, be it text-only or a GUI, can be input-only, output-only, or both. The normal approach is for it to be input/output locally and output-only on CI environments.
I don't think it is harmful at all to let downstream tools know/think that they are using a tty for output only, because a proper terminal is used almost always (all the suggested cmd, powershell and bash terminals are proper tty tools). Furthermore, most of the jobs that are executed on CI should run in "batch mode", even if an input terminal were shown. This is true for local execution too.
I recall this impacting things like installers, which may throw up a prompt and wait for input, and similar issues.
Most installers can be run in batch mode. If they can't, and the environment needs to be forced, that's an issue to be solved by the maintainers of those installers.
Please note that any limitation you might think about in this regard applies almost directly to docker containers. I mean that this has already been addressed and solved. Precisely, if the same tasks are executed in containers running on GHA, a proper tty (output-only) is provided. This is to say that any user can potentially replace run: ... with run: docker run --rm -tv $(pwd):/src -w /src ubuntu:19.04 ...., which provides a tty with no stdin. It is also possible to execute a container with stdin but without a tty: https://stackoverflow.com/questions/35459652/when-would-i-use-interactive-without-tty-in-a-docker-container.
I do know: we set $TERM=dumb so that tools run here don't try to do anything fancy. (...) How can we configure the environment to indicate that to downstream processes? I've played around with setting TERM differently, read about terminfo, etc., but honestly haven't fully wrapped my head around it yet.
I think that any solution which makes it optional, such as providing a (limited) tty and using TERM to indicate that it is not a regular "xterm", is better than forcing a solution. If a tty is provided, users can disable it. However, right now it is not available, so it's not possible to enable it.
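A quick way to observe that distinction on a Linux runner (assuming docker and the ubuntu image are available):

- run: |
    # with -t (and no -i) the container gets a pseudo-TTY, so isatty() is true
    docker run --rm -t ubuntu:19.04 sh -c '[ -t 1 ] && echo "stdout is a tty"'
    # with -i only, stdin is attached but stdout is a plain pipe
    docker run --rm -i ubuntu:19.04 sh -c '[ -t 1 ] || echo "stdout is not a tty"'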
EDIT
@xerial, precisely, this issue is related to actions/starter-workflows#96, because GPG seems to be required for using a docker credential helper.
What are you trying to do with this? Is it just color codes?
I'm trying to use tcsetpgrp and tcgetpgrp which require a TTY. They have nothing to do with color codes.
(I'm sure there are other implications of being a tty.) As far as I'm aware, the web console will render ANSI color codes correctly; however, there is of course the issue of: how is a tool, in general, supposed to know that?
It should be attached to a TTY, i.e. isatty() is true, and the TERM environment variable needs to be xterm-256color or something similar.
I don't really know enough to say for sure, but it seems like allocating a tty is not 100% correct: does that assume it is two-way input/output?
No, if you want you can close stdin of a TTY. It doesn't imply there is input or that a user is present with a keyboard.
Obviously you can't type into the web console, so I don't know how close we can get to saying the web console is a true tty, and thus implying we support the features of a tty (input, maybe more?).
Well, aside from the fact that you CAN type into a web console if it's set up for it, for automated background testing you can either close stdin or let it hang (maybe desirable behaviour; it's what Travis does).
One additional note here is that Travis allows you to log into a build to debug it, and since it's already a TTY that's easy to do. If you ever adopt this functionality in the future, you'll probably want to have a TTY allocated.
I hesitate to say we should force downstream tools (anything the runner runs during job execution, i.e. your workflow) to think they have a tty. I recall this impacting things like installers, which may throw up a prompt and wait for input, and similar issues.
Whatever the choice, you are forcing tools into some specific situation. That being said, most tools are designed to be run within a TTY.
I do know: we set $TERM=dumb so that tools run here don't try to do anything fancy. After all, all we are doing is buffering the stdout of the programs you run and streaming it line by line to the web browser.
If you use a pipe for stdout, it will potentially buffer a large amount, but if you use a TTY the buffering strategy is different and typically more fine-grained.
That being said, I did mention the web console will render ANSI colors. How can we configure the environment to indicate that to downstream processes? I've played around with setting TERM differently, read about terminfo, etc., but honestly haven't fully wrapped my head around it yet.
Typically you need to allocate a TTY and ensure TERM=xterm-256color.
I guess in general my stance is that the web console is not a terminal and shouldn't pretend to be, although we do in a sense, because we render ANSI colors.
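A common mitigation when stdout is a pipe is to force line buffering explicitly; a sketch assuming GNU coreutils stdbuf is present on the runner, where ./long_running_tests.sh is a placeholder command:

- run: stdbuf -oL ./long_running_tests.sh 2>&1 | tee test.log

Note this only helps with buffering; it does not make isatty() return true for the wrapped program.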
As stated by @1138-4EB a "web console"/"terminal" and a "TTY" are entirely different things.
And my question to that is: is there a granular way we can surface that capability (ANSI colors only) to downstream tools? Via environment or otherwise?
To be honest, I don't really care about colours, but that I need a TTY allocated so I can test process management functionality, including tcsetpgrp and tcgetpgrp.
@1138-4EB @ioquatix If you need to spawn processes with PTYs, you can check out https://www.pyinvoke.org/. I've been using it for a couple of months on GitLab CI/CD (which behaves very similarly to GitHub Actions in this respect).
Here's a little sample (the test task is of particular interest here): https://github.com/claudeleveille/asgard/blob/447bc6b5dda36c030053d24ec3ff26f43b8316ce/tasks.py. You'll notice in this pipeline log that pytest's output is colorized (because of the pty=True in tasks.py).
@claudeleveille, it seems that you are running all the steps inside docker containers (python:3.7.4-slim-buster). As a result, you are not using the pty/tty (not) available on the host. Although the use case might seem similar, I believe that your setup could be easily fixed by adding options: -t to the jobs.
Summarizing:
- To execute all the steps in the same docker container, and have a pty/tty, use
container: <image>andoptions: -t. - To execute some of the steps in (probably different) docker containers, and have a pty/tty for those steps only, use
docker run --rm -t. - It is not possible to execute steps on the host (
ubuntu-latest,windows-latest,macos-latest), and have a pty/tty.
This issue is requesting support for the last use case.
Using pyinvoke in the workflow file might be a suitable workaround, should it be installed by default. If users need to execute action setup-python and explicitly install it, I don't think it is useful. On the one hand, the execution of the action might conflict with users that are actually testing Python projects. On the other hand, users without knowledge about Python should not be forced to learn how to install a package.
Nevertheless, I will try pyinvoke. Should it work as expected on the three platforms, it might be worth requesting to have it installed by default in the environments. @dakale, wdyt?
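For reference, a hedged sketch of what such a setup could look like, assuming a hypothetical tasks.py in the repository whose test task calls ctx.run(..., pty=True):

steps:
  - uses: actions/checkout@v1
  - uses: actions/setup-python@v1
  - run: pip install invoke
  - run: invoke test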
@claudeleveille, I ran some further tests: https://github.com/1138-4EB/actions/tree/color
- container: non-coloured output: https://github.com/1138-4EB/actions/runs/324389230#step:4:35
- container_t: non-coloured output: https://github.com/1138-4EB/actions/runs/324389161#step:4:35. Hence, I was wrong above. Adding options: -t has no effect, because GHA seems to first create the container and then attach/exec. The option is used for creation only.
- docker: coloured output: https://github.com/1138-4EB/actions/runs/324389199#step:3:62. Note that scripts don't need to be modified, and additional envvars don't need to be set.
- docker_shell: coloured output: https://github.com/1138-4EB/actions/runs/324389256#step:4:63. This is equivalent to the previous one, but the hackish procedure described in actions/toolkit#232 is adapted in order to use multi-line run fields.
- host: non-coloured output: https://github.com/1138-4EB/actions/runs/324389350#step:4:41. This is the default setup that any new user would use.
- invoke: this is really strange. 30 min ago, it worked and provided coloured output: https://github.com/1138-4EB/actions/runs/324359942#step:4:44. But now it is failing: https://github.com/1138-4EB/actions/runs/324389276#step:4:35 (You indicated pty=True, but your platform doesn't support the 'pty' module!).
- invoke_runpy: this is equivalent to the previous one, but a helper file is used. It worked too (https://github.com/1138-4EB/actions/runs/324359920#step:4:44), but it fails now: https://github.com/1138-4EB/actions/runs/324389330#step:4:35
- invoke_ptypy: this is an attempt to use the procedure described in actions/toolkit#232 along with the previous approach. It fails because it cannot find the module installed through pip: https://github.com/1138-4EB/actions/runs/324434912#step:5:10
- invoke_ptysh: this is similar to the previous one, but an intermediate shell script is used to ensure that invoke is installed with the same version of Python that is then used to call pty.py. Anyway, it fails like invoke and invoke_runpy: https://github.com/1138-4EB/actions/runs/324434941#step:5:12
Should invoke be consistent, it would be an interesting alternative to docker and docker_shell. Furthermore, it might be a good built-in solution which is not enabled by default (which was @dakale's concern). Unfortunately, I don't understand why some executions work and others don't. Is it because of the underlying machine where those jobs are scheduled?
(Thanks @joshmgross)
This issue is better suited to live here
I'll reopen since we haven't made a decision on whether it's something we want to implement or not.
This is certainly worth digging into and figuring out.
FYI as a workaround you can use script -e -c $YOUR_USUAL_COMMAND_HERE, but this is obviously a bit of a hack.
@dalehamel, that sounds interesting. Can you please elaborate? Is it possible to use it as a shell?
steps:
  - uses: actions/checkout@v1
  - shell: script -e -c {0}
    run: |
      pip3 install tox --progress-bar off
      tox -e py38-test
Can you please elaborate?
From the manpage:
script makes a typescript of everything displayed on your terminal. It is useful for students who need a hardcopy record of an interactive session as proof of an assignment, as the typescript file can be printed out later with lpr(1).
So it seems like some OG Unix command (lol at the "students" part)... I'd never heard of it until I tried porting some tests that need a TTY to GitHub Actions. I don't care about actually saving the typescript file, but it seems to be able to fake(?) there being a TTY well enough for things that expect stdout to be a TTY to actually work.
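A quick way to see that effect on a Linux runner (util-linux script), using /dev/null as the typescript file so nothing is recorded:

- run: script -qec "python3 -c 'import sys; print(sys.stdout.isatty())'" /dev/null

Run directly, the same Python one-liner prints False; under script it prints True.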
Is it possible to use it as a shell?
I did a test with shell: script -e -c /bin/bash -c {0} to explicitly use bash, which is what I was already doing inline (I didn't know about this shell keyword; cool!), but unfortunately this didn't work:
sh: 1: /home/runner/work/_temp/245cfd66-ef64-4af9-be61-ff8b520db04b: Permission denied
IDK why, haven't debugged further. I was just wrapping my other commands in it and those worked fine.
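One variant that should avoid that permission error is to make script invoke bash on the temp file instead of trying to execute the (non-executable) file directly; a sketch, with the caveat that it leaves a typescript file behind in the working directory:

- shell: script -q -e -c "bash {0}"
  run: |
    pip3 install tox --progress-bar off
    tox -e py38-test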
@bryanmacfarlane any update here?
No update. This is currently fairly low in the actions backlog priority. It's not in the plans for the next quarter.
Do you accept a PR to make it a tty?
Jest doesn't use colors either.
For jest you can use the env var FORCE_COLORS=true or --colors
For jest you can use the env var FORCE_COLORS=true or --colors
It is FORCE_COLOR=1 (note it's not plural), and it comes from chalk.
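Since that variable comes from chalk, it can be set once at the job level instead of per tool; a sketch in which the job name and the yarn jest step are placeholders, and which only helps tools that honour chalk's convention:

jobs:
  test:
    runs-on: ubuntu-latest
    env:
      FORCE_COLOR: "1"   # picked up by chalk-based tools such as jest
    steps:
      - run: yarn jest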
I feel like every tool adding a FORCE_COLOR=1 or --use-colors argument is simply the wrong solution to this problem. I'm disappointed that this is not higher on the agenda. The "collateral damage" is non-trivial, in the sense that this behaviour is forcing a lot of third-party tools to implement functionality just to work around it.