Access environment w/o vision?
Hello! We're very interested in the general idea of acting in virtual environments, but not very interested in trying to solve vision, or dealing with the ambiguity of vision extractions. I was wondering: is there any way to access the environment, bypassing the vision component and just getting the gold-standard recognition results?
Hi Graham, thanks for your interest in CHALET. The current CHALET version does not support symbolic environments. Is there any specific form of the input that you want?
We can generate an output that contains the name of the current room and, for every object in that room, its 3D position, 3D rotation (Euler angles), and discrete state (e.g., a drawer can be open or closed, a television turned on or off). We can restrict this list to the objects visible in the robot's current frame, or include all objects in the current room. The list would be sent over a socket for each frame and could be read by an agent written in any programming language. We have used Python successfully in the past, but you could use another language.
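To make the idea concrete, here is a minimal sketch of what the agent side might look like. The wire format below (one newline-delimited JSON object per frame, with `room` and `objects` fields) is purely an assumption for illustration, not CHALET's actual protocol; a local socket pair stands in for the environment so the example is self-contained.

```python
import json
import socket

# Hypothetical per-frame message (NOT CHALET's actual wire format):
# one JSON object per line, carrying the room name and, for each
# object, a 3D position, 3D rotation (Euler angles), and a discrete state.
frame = {
    "room": "Kitchen",
    "objects": [
        {"name": "Drawer_1",
         "position": [1.2, 0.8, -0.5],
         "rotation": [0.0, 90.0, 0.0],
         "state": "closed"},
        {"name": "Television_1",
         "position": [3.0, 1.1, 2.4],
         "rotation": [0.0, 180.0, 0.0],
         "state": "off"},
    ],
}

def read_frame(sock_file):
    """Read one newline-delimited JSON frame from a socket file object."""
    line = sock_file.readline()
    return json.loads(line)

# Simulate the environment side with a local socket pair so this
# snippet runs without a real CHALET server.
env_sock, agent_sock = socket.socketpair()
env_sock.sendall((json.dumps(frame) + "\n").encode("utf-8"))

with agent_sock.makefile("r", encoding="utf-8") as f:
    received = read_frame(f)

print(received["room"])                        # Kitchen
print([o["name"] for o in received["objects"]])
env_sock.close()
agent_sock.close()
```

In a real setup the agent would connect to the environment's socket and call `read_frame` once per simulation step; any language with JSON and socket support could consume the same stream.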
It will take me 2-3 weeks to add this functionality and release a new version (I am currently on the job market and snowed under). I am also happy to work with your student to help them add this functionality and test the current version to see if it fits your goal.
What do you think?
Hi Dipendra,
Thanks a lot! We're not in any hurry whatsoever, I was just interested in the feasibility of this. Please don't go out of your way to do it for us, as we're not sure whether we'd end up using it, but if you think it'd be nice to have for other people as well, we'd be happy to know when it's implemented.
It is definitely feasible, and I believe other researchers, including us, would benefit from it as well. In the past we have run experiments with CHALET but haven't ablated the results to isolate vision failures, and I think it is a good idea to do so.
I'll add this modification to my list for the next release and update this thread when it is out. Thanks.