Allow more APIs for CodeQuestion
(Hi, long time no see.)
I'd like to add more APIs for CodeQuestion, so that some simple tests which are currently not possible in Relate (AFAIK) can be done. For example, testing whether a specific function was called with specific arguments (#381).
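Outside of Relate, the kind of check I have in mind looks roughly like this with unittest.mock (`compute` and `student_solution` are made-up names purely for illustration):

```python
from unittest import mock

def compute(a, b):
    # Made-up helper the student's code is expected to call.
    return a + b

# Wrap the real function in a spy so calls can be asserted
# while preserving the original behavior.
spy = mock.Mock(wraps=compute)

def student_solution(compute=spy):
    # Stand-in for submitted code that should call compute(2, 3).
    return compute(2, 3)

result = student_solution()
# The test can now assert both that the call happened and its arguments.
spy.assert_called_once_with(2, 3)
```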
Another use case is testing scripts (not functions) submitted by a user. For example, I need to check whether input was used as expected in a script. Suppose the following is the user_code:
```python
name = input("please input your name")
print("Hello %s!" % name)
```
My workaround is to turn user_code into the following before sending it to request_run:
```python
def __func():
    name = input("please input your name")
    print("Hello %s!" % name)
```
and then I can use unittest together with mock to check whether it really works. That also requires `__func` being added to the run_req namespace as a runpy context, so that `__func` can be called in test_code.
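As an illustration of the kind of test I mean (`check_greeting` is a made-up helper; the real test_code would live in the question definition):

```python
import io
from unittest import mock

def __func():
    # The wrapped user_code from above.
    name = input("please input your name")
    print("Hello %s!" % name)

def check_greeting(answer):
    # Feed input() a canned answer and capture what the script prints.
    with mock.patch("builtins.input", return_value=answer) as mock_input, \
            mock.patch("sys.stdout", new_callable=io.StringIO) as fake_out:
        __func()
    return mock_input.call_count, fake_out.getvalue()
```

A test can then assert that input() was called exactly once and that the captured output contains the expected greeting.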
To summarize, I'd like to refactor the grade method so that more kinds of subclasses of CodeQuestion or PythonCodeQuestion become possible.
May I write a PR to do that? Thanks.
Good to see you again @dzhuang!
If I understand correctly, what you're requesting can (mostly) already be done, as Relate provides the source code from the participant to the grading code (as a string) in user_code. One particular tweak that might be needed is to introduce a knob that prevents Relate from trying to run that code. (So that it can be run only under the control of the grading code.) I'm open to considering that change. Such a knob should enforce that names_{for,from}_user is empty.
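For instance (purely a sketch; `run_submission` and the knob itself are hypothetical), grading code could run the submitted string itself along these lines:

```python
import io
from unittest import mock

def run_submission(source, answer):
    # Execute the submitted script in a fresh namespace, feeding input()
    # a canned answer and capturing everything it prints.
    namespace = {}
    with mock.patch("builtins.input", return_value=answer), \
            mock.patch("sys.stdout", new_callable=io.StringIO) as fake_out:
        exec(source, namespace)
    return fake_out.getvalue()

# user_code would be supplied by Relate; hard-coded here for illustration.
user_code = 'name = input("your name?")\nprint("Hello %s!" % name)\n'
```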
Thank you, and sorry for my late reply (17 days passed :<). I'll try to submit a PR when I'm ready.