Add proper dry_run for OpenR
> Thanks for this! We'll eventually work on the dry_run if that ever creates an issue in the future.

_Originally posted by @oliviertilmans in https://github.com/cnp3/ipmininet/pull/18#issuecomment-483574363_
The `dry_run` for OpenR currently only calls the `openr` daemon with `--version`, since the daemon does not quit after parsing its configuration. I was looking at the code, and it is not as simple as implementing it within the openrd `dry_run` method. My idea was to pipe OpenR's output and wait for the line `Starting OpenR daemon.`:
```shell
openrd --foo="bar" [...] --dryrun | grep -m1 "Starting OpenR daemon."
```
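To illustrate the pipe semantics (with `printf` standing in for `openrd`, since the real daemon keeps running), `grep -m1` exits as soon as the first matching line appears:

```shell
# printf stands in for openrd here; the real daemon would keep running
# until the pipe is torn down after grep exits.
printf 'Parsing config...\nStarting OpenR daemon.\n' \
    | grep -m1 "Starting OpenR daemon."
# → Starting OpenR daemon.
```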
I realized that piping is not possible with the current implementation of `pexec` in the router. Maybe an expect script would be a viable solution; however, that would introduce a new dependency. Any thoughts on how to implement a proper dry_run for OpenR?
What you likely want is to launch `openrd` using `popen` instead of `pexec`.
One way to do that would be to rework how we use dry_run such that:
- The logic using `dry_run` is abstracted, i.e., extracted from `__router.py` to a new method in `Daemon`, such as `do_config_check()`:
```python
def do_config_check(self):
    out, err, code = self._node.pexec(shlex.split(self.dry_run))
    if code:
        lg.error(self.NAME, 'configuration check failed [rcode:',
                 str(code), ']\nstdout:', str(out), '\nstderr:', str(err))
    return code
```
- `OpenrDaemon` would then override that new method à la:
```python
import fcntl
import os
import time
# [...]

@property
def dry_run(self):
    raise TypeError('OpenrDaemon requires to use its custom do_config_check')

def do_config_check(self):
    p = self._node.popen('...')
    # read p.stdout or p.stderr here and match on your target string
    # naive example below
    fcntl.fcntl(p.stdout.fileno(), fcntl.F_SETFL,
                fcntl.fcntl(p.stdout.fileno(), fcntl.F_GETFL) | os.O_NONBLOCK)
    output = ''
    start = time.time()
    while True:
        chunk = p.stdout.read(64)
        if chunk:  # non-blocking read returns nothing when no data is available
            output += chunk
        if 'some_string' in output:
            break
        if time.time() - start > 5:
            p.terminate()
            raise RuntimeError('OpenrDaemon took more than 5s to check the config')
        time.sleep(.1)
    p.terminate()
    return perform_more_sanity_checks()
```
The downside of this is that there would not be any integration with the current process-management system, which tries to ensure that all processes are cleaned up before the network is destroyed. Maybe the daemons should hold a reference to the router's `__processes` instance instead of the node? (I cannot remember whether this is already the case.)
Ok, this is exactly the direction I was thinking of. Maybe a decorator would come in handy. I need to study the cleanup code more. Thx for the guidance.