Refactor network code
This commit moves all the platform-specific network code into two new source files, "src_sys/socket.h" and "src_sys/socket.cpp". It introduces two classes:
- TcpService: an abstraction for a TCP server socket
- TcpConnection: an abstraction for a TCP "communication" socket

It is tailored to NASSP and provides only the bare minimum required for the current needs.
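For context, the interface is roughly along these lines; the method names and signatures below are illustrative placeholders, not necessarily the ones in src_sys/socket.h:

```cpp
// Illustrative sketch only -- names and signatures are hypothetical,
// not the actual interface added in src_sys/socket.h.
#include <cstdint>
#include <cstddef>

class TcpConnection {
public:
    // Send/receive raw bytes on an established connection.
    // Both return the number of bytes transferred, or a negative
    // value on error, mirroring the underlying BSD socket calls.
    long Send(const void *data, size_t length);
    long Receive(void *buffer, size_t length);
    bool IsConnected() const;
    void Close();
};

class TcpService {
public:
    // Bind and listen on the given TCP port (e.g. the telemetry port).
    bool Listen(uint16_t port);
    // Non-blocking accept: returns true and fills 'conn' if a client
    // connected, false otherwise.
    bool Accept(TcpConnection &conn);
    void Close();
};
```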
Tested using the LMTelemetryClient2 client to "download" telemetry from the LEM and uplink DSKY commands. I'm not savvy enough with the MFDs to correctly validate the CSM side of things, though.
You're using WIN32-specific functions in your new class. Maybe it's an idea to make this a base class and subclass a WIN32 and (in the future) a Linux network implementation? Then we pick a subclass based on the platform NASSP is compiled for.
I try to stay away from OO garbage when possible. Inheritance is not required here since the choice would be known at compile time (I also did it "pimpl" style as a test, but it was way overkill IMHO). The Winsock API is close enough to POSIX that a few #ifdefs should be enough to support both.
I think I can still factorise some error handling, but that is trivial. The important thing is the interface.
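To illustrate the point about #ifdefs, a minimal sketch (not the code from this PR): the main platform differences are initialization, the close call, and error reporting, and they can be isolated in a few lines.

```cpp
// Minimal sketch of the #ifdef approach -- illustrative, not the PR code.
// (On Windows, Winsock also needs WSAStartup()/WSACleanup(), omitted here.)
#ifdef _WIN32
#include <winsock2.h>
typedef SOCKET socket_t;
#define CLOSE_SOCKET(s) closesocket(s)
static int last_socket_error() { return WSAGetLastError(); }
#else
#include <sys/socket.h>
#include <unistd.h>
#include <cerrno>
typedef int socket_t;
#define CLOSE_SOCKET(s) close(s)
static int last_socket_error() { return errno; }
#endif
```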
Hmm, interestingly, I did a profiling run and, with time acceleration, the accept call is one of the most time-consuming (up to 30% at 100x). Adding a basic rate limiter seems to help a lot. Is it something worth investigating?
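The rate limiter was nothing fancy; roughly something like the sketch below, throttling the accept poll against wall-clock time so it is independent of time acceleration (the function name and the 100 ms interval are just placeholders):

```cpp
// Sketch of a wall-clock rate limiter around the accept poll -- illustrative only.
#include <chrono>

bool ShouldPollAccept()
{
    using clock = std::chrono::steady_clock;
    static clock::time_point last{};

    const auto now = clock::now();
    // Poll for new connections at most ~10 times per second of real time,
    // regardless of simulation time acceleration.
    if (now - last < std::chrono::milliseconds(100))
        return false;
    last = now;
    return true;
}
```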
accept() is for looking for new TCP connections and therefore should be called at a real-time-relative rate, not a simulation-time-relative rate.
Telemetry should be sent at a simulation-time-relative rate, though. If you change that, it will change the HBR/LBR data frame rate and break what I am working on now.
> accept() is for looking for new TCP connections and therefore should be called at a real-time-relative rate, not a simulation-time-relative rate. Telemetry should be sent at simulation-time-relative rate though - If you change that, it will change the HBR/LBR data frame rate and break what I am working on now. […]
accept is called in perform_io (i.e. in sim-time) and so runs quite a lot when time acceleration is on. I will do some more profiling and see if it is worth the hassle. Don't worry, I just rate-limited the accept syscall, not the actual simulation (and if we do it at all, it won't be done this way; more likely with a threaded socket server doing a blocking accept).
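If we ever go the threaded route, the idea would be roughly the sketch below: a dedicated thread blocks in accept() and hands new connections to the sim thread. This is POSIX-flavoured and the names (AcceptLoop, g_pending, StartAcceptThread) are placeholders, not code from this PR; Winsock would use SOCKET instead of int.

```cpp
// Sketch of the threaded alternative -- illustrative only.
#include <thread>
#include <mutex>
#include <vector>
#include <atomic>
#include <sys/socket.h>

static std::vector<int> g_pending;        // new client sockets, drained by perform_io
static std::mutex g_pending_mutex;
static std::atomic<bool> g_running{true};

static void AcceptLoop(int listen_sock)
{
    while (g_running) {
        int client = accept(listen_sock, nullptr, nullptr); // blocks here, no polling
        if (client < 0)
            continue;                                       // error or shutdown
        std::lock_guard<std::mutex> lock(g_pending_mutex);
        g_pending.push_back(client);                        // handed off to the sim thread
    }
}

static void StartAcceptThread(int listen_sock)
{
    // Started once at module init; perform_io only has to drain g_pending.
    std::thread(AcceptLoop, listen_sock).detach();
}
```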
Edit: Some more testing using the "Apollo 11 - 06 - Before MCC-2" scenario in external view @2776AU to limit GPU impact:
- upstream version: 245 fps at 1x and 10x, slideshow (<1 fps) at 100x time acceleration
- rate limiting hack: 245 fps at 1x, 10x and 100x
Given the crazy oscillations I get at 100x, I don't think it will be a valid use case for a while. I think I will leave it as-is for the time being.