Brain-dump while at the airport...
- Layered architecture:
  - One should be able to write "low level" OpenGL code without needing the full-fledged rendering and scene management classes, using the low-level classes directly to handle the camera, GLSL, the GL context and functions, etc. For example, it should be fairly easy to translate the OpenGL SuperBible examples or the NeHe OpenGL tutorials using only VL utility classes (see the last sketch after this list).
  - Is it possible to abstract away OpenGL to the point where a Vulkan "driver" becomes possible? Not super urgent, mainly an architectural exercise, since I don't think OpenGL will disappear anytime soon.
- New vl::OpenGLContext:
  - UI events should be moved out into a generic vl::UIEventEmitter that emits events to vl::UIEventListener objects.
  - vl::OpenGLContext should not be a vl::UIEventEmitter; instead it should have one.
  - Focus on tracking OpenGL states.
  - Move the lazy OpenGL state update logic out of the renderers and into here (partially done already).
- Use more composition and less derivation: for example, UI wrappers like vl::Qt5Widget shouldn't derive from vl::OpenGLContext; instead they should have a vl::OpenGLContext and a vl::UIEventEmitter. It's more flexible and it also avoids method clashes (see the first sketch after this list).
- Better encapsulation of OpenGL functions and defines, using a vl::GLFunctions class (see the second sketch after this list).
- Core-first architecture: drop compatibility-profile support and expand support for Core profile use cases with more ready-to-use GLSL shaders and utilities.
- All the above would make a mini WebGL version of VL relatively easy to implement.
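Rough sketch of the UIEventEmitter/UIEventListener split and of the composition idea above, just the shape of the proposal; apart from the class names mentioned in the list, every method name here is a placeholder, not actual VL API:

    // Sketch only: decouple UI event dispatching from the GL context.
    #include <cstddef>
    #include <vector>

    namespace vl {

      // Slimmed-down context: tracks OpenGL state, knows nothing about UI events.
      class OpenGLContext {
      public:
        void makeCurrent() { /* platform specific */ }
        // ... lazy state tracking moved here from the renderers ...
      };

      // Implemented by whoever wants UI events (applets, trackball manipulators, ...).
      class UIEventListener {
      public:
        virtual ~UIEventListener() {}
        virtual void resizeEvent( int /*w*/, int /*h*/ ) {}
        virtual void mouseMoveEvent( int /*x*/, int /*y*/ ) {}
        // ... key, mouse wheel, file-drop events, etc. ...
      };

      // Owns the listener list and dispatches the events; no OpenGL involved.
      class UIEventEmitter {
      public:
        void addEventListener( UIEventListener* listener ) { mListeners.push_back( listener ); }
        void dispatchResizeEvent( int w, int h ) {
          for ( std::size_t i = 0; i < mListeners.size(); ++i )
            mListeners[ i ]->resizeEvent( w, h );
        }
        // ... one dispatchXxxEvent() per event type ...
      private:
        std::vector<UIEventListener*> mListeners;
      };
    }

    // A UI wrapper *has* a context and an emitter instead of *being* them,
    // so its methods cannot clash with vl::OpenGLContext's.
    class Qt5WidgetSketch /* : public QOpenGLWidget */ {
    public:
      vl::OpenGLContext*  context()      { return &mContext; }
      vl::UIEventEmitter* eventEmitter() { return &mEmitter; }
    protected:
      void onResize( int w, int h ) { mEmitter.dispatchResizeEvent( w, h ); }
    private:
      vl::OpenGLContext  mContext;
      vl::UIEventEmitter mEmitter;
    };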
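And a rough sketch of the vl::GLFunctions idea: one class that resolves and exposes the GL entry points, so the rest of the library (including any version-dependent fallbacks) goes through a single, easy-to-grep place. The loader callback and the two wrapped calls are just examples:

    // Sketch only: a single hub for the OpenGL entry points.
    // Calling-convention macros (APIENTRY) are omitted for brevity.
    namespace vl {

      class GLFunctions {
      public:
        GLFunctions(): mClear( 0 ), mDrawArrays( 0 ) {}

        // Loader callback provided by the windowing layer (wgl/glX/SDL/Qt...).
        typedef void* (*GetProcAddressFunc)( const char* name );

        // Resolve the entry points from the current context; false if something is missing.
        bool initialize( GetProcAddressFunc getProcAddress ) {
          mClear      = reinterpret_cast<PFNCLEAR>( getProcAddress( "glClear" ) );
          mDrawArrays = reinterpret_cast<PFNDRAWARRAYS>( getProcAddress( "glDrawArrays" ) );
          // ... resolve the rest of the API, with per-version / per-extension checks ...
          return mClear != 0 && mDrawArrays != 0;
        }

        // Thin wrappers: the only code that talks to the driver, and the natural
        // place to hide version-dependent implementations of a feature.
        void clear( unsigned int mask ) const { mClear( mask ); }
        void drawArrays( unsigned int mode, int first, int count ) const { mDrawArrays( mode, first, count ); }

      private:
        typedef void (*PFNCLEAR)( unsigned int );
        typedef void (*PFNDRAWARRAYS)( unsigned int, int, int );
        PFNCLEAR      mClear;
        PFNDRAWARRAYS mDrawArrays;
      };
    }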
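Finally, the kind of "SuperBible / NeHe style" low-level use the layered architecture should allow, building only on the two sketches above; no renderers, no scene manager (the two GL constants are written out since no GL header is assumed here):

    // Sketch only: per-frame code that touches nothing but the thin low-level layer.
    void renderFrame( vl::OpenGLContext& context, const vl::GLFunctions& gl )
    {
      context.makeCurrent();

      const unsigned int GL_COLOR_BUFFER_BIT = 0x00004000;
      const unsigned int GL_TRIANGLES        = 0x0004;

      gl.clear( GL_COLOR_BUFFER_BIT );
      // ... bind a GLSL program and a vertex array prepared elsewhere ...
      gl.drawArrays( GL_TRIANGLES, 0, 3 );

      // swapBuffers() stays with the UI wrapper / windowing layer.
    }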
Ref:
- https://developer.nvidia.com/gameworks-vulkan-and-opengl-samples
- http://www.openglsuperbible.com/example-code/
Hi Mic
It's been a while since I last worked with VL directly (I've only been using my s/w that embeds VL), but revisiting the site these days I'm glad to see that you are back with interesting ideas. I just wanted to post this to say that I agree 101.5% with your proposals :)
-
From the early stages of developing with VL, I felt that the OGL functions should be contained in one place only, instead of being scattered over various locations. I won't go over the main advantages, as you clearly agree. Just one thing: it's far easier to track the code and see what it's doing and how...
-
Low-level architecture: oh, yeah! About 1.5 years ago I posted an example of my work using VL for math-intensive GPGPU computation of X-ray interaction with the human body. Boy, I wish I had had such a feature in that time's VL, rather than having to go through scene management for such a thing. In particular: VL's GLSL system, some rendering states (e.g. blending), a (maybe simpler?) texture interface, the DrawCall hierarchy, the framebuffer region interface and, of course, an OpenGL functions center as a thin layer on top of the OGL API (hiding the version-dependent implementation of a specific feature, as you already do nicely in some render states). And not to forget vlCore's maths.
-
Re-designing the OpenGLContext was a MUST! Indeed, it has nothing to do with UI dispatching. In fact, bear in mind that many developers may wish to integrate VL as a part of a larger (OpenGL-based) application where the windowing and the OpenGL context(s) are already present. They may only want to invite VL to render into a pre-prepared viewport somewhere, without it needing to create its own context, etc. For example, I use the GWEN (OpenGL) GUI, which creates everything, but I need VL to render into an allocated region of the app's canvas. Therefore, I see the OpenGLContext as a convenience tool offered to users rather than a mandatory dependency (or a wrapper of a more general canvas viewport?). Just my opinion...
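Something along these lines, just to sketch the idea (all names invented): a small object that adopts the host application's already-current context and renders only into the rectangle it is given:

    // Sketch only: VL as a guest renderer inside an existing OpenGL application
    // (e.g. a GWEN-based GUI that already owns the window and the context).
    namespace vl {

      class GuestRendererSketch {
      public:
        GuestRendererSketch(): mX( 0 ), mY( 0 ), mW( 0 ), mH( 0 ) {}

        // The host tells VL which rectangle of its canvas it may draw into.
        void setViewportRect( int x, int y, int w, int h ) { mX = x; mY = y; mW = w; mH = h; }

        // Called by the host inside its own render loop, with its context current.
        void render() {
          // 1. save the GL states VL is about to touch
          // 2. set the viewport to (mX, mY, mW, mH) and render the VL scene/applet
          // 3. restore the host's states
        }

      private:
        int mX, mY, mW, mH;
      };
    }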
-
UI wrappers: couldn't agree more. They should contain the OGL context and a UI event dispatcher, not be one. And see above: VL should not depend on these. Well, I'm not saying it does, but it still needs an OpenGLContext derivation in order to work, doesn't it?
That being said, I'm looking forward to using VL in the new form, for both on-screen and GPGPU use.
All the best, Virgiliu
Thanks @VirgiliuC :)