/me tests compiling Qt 5.15 from scratch, and so far looking good.
/me rediscovers alacritty and thinks it might be a viable embedded terminal solution!
Stumbled on this, in the Qt terminal department: https://github.com/jsbronder/sff
/me debates between the dreary utility of trying to properly fix the WiX msi installer's startup dirs and diving back into the src/other superbuild thorn pile...
@starseeker thanks, that's interesting. Do you have any indication that it actually works on Windows, though? It looks like it uses libvterm which, unless I'm mistaken, doesn't appear to support Windows. I looked at the source code and it doesn't look like libvterm uses a pty, so that's promising, but I'm still left wondering what it does.
ah, maybe it is still using pty somewhere. looks like it: https://github.com/akermu/emacs-libvterm/issues/12
that's what makes alacritty interesting. it has a similar virtual terminal interface, but is using an abstraction underneath to establish a pseudoterminal interface on windows.
I mean, it'd be quite a feat of engineering given it's another language stack, but it's one of only three possibilities I've found so far.
Not sure of the details, but libvterm is used by neovim, which I know does work on Windows: https://github.com/neovim/neovim
it's used by emacs-vterm too, but they specifically say it doesn't work on windows
which is frankly what I would expect.
I'd have to dig into what neovim is doing. It sure felt like they were terminal emulating when I last ran it...
only know of a handful of libs that abstract an interface that includes windows support, two being in Node's ecosystem
Maybe related: https://github.com/equalsraf/libvterm-win/commit/f876a3379bcd80e467d770ca1dd74448b60fe40c
were you able to run a command-line command in neovim?
that's cool if it was just the escape code being wrong
saw others were able to get it to compile
No, that I didn't try - I was referring more to the look/feel of the neovim editing window and interactions.
heh, oh
yeah, I mean emacs runs fine on windows too :P
Fair enough - I seldom run external commands from an editor, so it's not something I would miss
there's even a separate module to kick off a command shell -- and something like that is always an option
/me experiments...
i.e., just maintain two separate interfaces -- I've seen a lot of projects do that. pty for non-windows and msys for windows
if libvterm abstracts the line interpreter (usually the pty bit coming from the system), then they could avoid needing a pty interface altogether. I've been hoping to find some solution like that as we don't actually need kernelspace interaction
heck, we could even get away without a terminal (this is what mged does)
https://github.com/neovim/libvterm is the place to check
but that means interactive commands are no good. that's why mged does all its "MORE_ARGS" funniness
that's their fork - if anyone's got that working, it's probably them
I read through that, unclear
I'm updating the git conversion right now, I'll flip OSes once that's done and try neovim on Windows. Anything specific I should try?
it's whether it's abstracted enough of the i/o interpretation for the tty driver
try running an interactive command from within vim, like mged
er, mged -c
make sure it works in cmd.exe first ;)
can start with something simple non-interactive like "time"
but the real test of a vterm is an interactive command
https://github.com/neovim/neovim/pull/7007
Looks like they're using winpty
Cool, that's great to know too, and will undoubtedly be easier to maintain than alacritty then
maybe
looks like winpty relies on msys :/
looking more and more like going the git-bash route is going to be unavoidable.
looks like alacritty can be built to use winpty or conpty (https://devblogs.microsoft.com/commandline/windows-command-line-introducing-the-windows-pseudo-console-conpty/)
cool, looks like conpty doesn't use msys or cygwin, so just need to find that code and extract it or find someone else that wraps them with unixpty, like these guys: https://github.com/kafeg/ptyqt
that actually might be exactly what I was looking for .... huh
hot diggity
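For reference, the ConPTY setup (going by Microsoft's documentation) boils down to roughly the following. This is only a sketch - untested here, and the part that actually attaches the console to a child process via STARTUPINFOEX/PROC_THREAD_ATTRIBUTE_PSEUDOCONSOLE is omitted:

// Sketch of creating a Windows pseudo console (ConPTY).  Requires Windows 10 1809+
// and a matching SDK.  Untested here; attaching a child process to the HPCON via
// STARTUPINFOEX + PROC_THREAD_ATTRIBUTE_PSEUDOCONSOLE is omitted for brevity.
#include <windows.h>

HPCON setup_conpty(HANDLE *pty_in_write, HANDLE *pty_out_read)
{
    HANDLE in_read = NULL, out_write = NULL;

    // One pipe feeds input to the pseudo console, the other carries its output back
    if (!CreatePipe(&in_read, pty_in_write, NULL, 0))
	return NULL;
    if (!CreatePipe(pty_out_read, &out_write, NULL, 0))
	return NULL;

    COORD size = {80, 25};
    HPCON hpc = NULL;
    HRESULT hr = CreatePseudoConsole(size, in_read, out_write, 0, &hpc);

    // Per the MS sample, the console dups these handles, so they can be closed here
    CloseHandle(in_read);
    CloseHandle(out_write);

    return (hr == S_OK) ? hpc : NULL;
}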
/me mentally shelves this until after the announcement and github ...
/me ponders whether the libged subprocess mechanisms should be generalized to allow using QProcess, as well as libbu's subprocess...
That doesn't make a lot of sense to me. It'd make more sense to make it an implementation option in libbu.
I'll study it some more... I'm hoping QProcess might offer some advantages integrating the libged command I/O into Qt gui elements, but as far as I can tell you can't create a QProcess using an already-launched program, so I'm not sure how that would/could work with a libbu API...
I don't understand ... QProcess invokes a binary and sets up I/O handlers. That's pretty much the scope of what bu_process_exec() and the other API in process.c are doing too.
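For comparison, a minimal QProcess invocation looks something like the sketch below - the program name and arguments are just placeholders, not a real rt invocation:

// Minimal sketch of launching a subprocess with QProcess and reading its output
// asynchronously via signals/slots.  The command and arguments are placeholders.
#include <QProcess>
#include <QObject>
#include <QDebug>

void launch_example()
{
    QProcess *proc = new QProcess;

    // Output arrives via signals rather than blocking reads
    QObject::connect(proc, &QProcess::readyReadStandardOutput, [proc]() {
	qDebug() << proc->readAllStandardOutput();
    });
    QObject::connect(proc, QOverload<int, QProcess::ExitStatus>::of(&QProcess::finished),
	[proc](int code, QProcess::ExitStatus) {
	    qDebug() << "process exited with" << code;
	    proc->deleteLater();
	});

    proc->start("rt", QStringList() << "-o" << "out.png" << "model.g" << "all");
}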
Making Qt a library requirement is a big architectural decision that really shouldn't be made lightly.
and an even bigger one if it's not encapsulated behind API... that would go against our current architecture as an antipattern
Getting it to work underneath bu_process_*() would likely expose a variety of limitations / assumptions in that code now. Eliminating those from the API would be very useful - a really good thing - and it would keep things encapsulated (== more maintainable).
@Sean My hope (although I don't know yet) is that QProcess will have signal/slot integrations with Qt's subsystems that offer advantages. I agree it shouldn't be exposed publicly in the APIs. My query was whether a Qt based application could somehow be set up to fire libged commands that are launched as subcommands (currently rt and friends, but eventually most or all of them) using QProcess, rather than the implementations currently underneath the libbu subprocess code. It could very well be that there aren't any advantages, or at least not compelling ones - I just don't know yet.
It'll be a while before it comes up, anyway - currently I'm getting my tail kicked around the keyboard trying to figure out how to get OpenGL rendering to work in a QThread to not block the main app.
I wasn't referring to being publicly exposed in the API. That's a WHOLE other level of antipattern and wouldn't be compatible with libged public API anyways.
It's about it being encapsulated or not. If it's directly called by a library, then that becomes a coupling. If the coupling is in all call sites, then it's a tight coupling that is more costly to replace than if it were encapsulated (i.e., light coupling).
That said, there are a couple ways to avoid tight coupling and still use it in libged. There's putting it behind libbu API, or we could add a delegate mechanism where the calling application code passes in a handler that uses QProcess to invoke.
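Purely as a sketch of the delegate idea (none of these names exist in libged today - they just illustrate "the application passes in a handler"):

/* Hypothetical delegate structure - illustrative only, not existing libged API.
 * libged would call through the handler when one is set, and fall back to its
 * normal bu_process-based path when it isn't, so the library never sees Qt directly. */
struct ged_exec_delegate {
    /* The application's implementation can use QProcess (or anything else). */
    int (*run)(void *app_data, int argc, const char **argv);
    void *app_data;
};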
Woo-hoo! Still in its rudimentary beginnings, but first correct display of a libdm generated wireframe in a Qt supplied OpenGL using the new libdm setup and qtgl backend: libdm_and_QOpenGLWidget.png
That's great!
Early beginnings of decoding MGED's mouse interactions (wow - modes upon modes upon modes. gnarly...)
Qt_libdm_mouse_rot.gif
/me groans... and so far, my machine is the only one where this works.
That's the OGL life... Want me to test it?
Not yet - I'm still working on getting it to build on the Mac without manual hackery
All the work we did to get our CMake logic to detect the X11 OpenGL is cutting against me now.
More progress... additional_mouse_motions.gif
@Sean It's still pretty rudimentary, but I did get it to build on OSX if you wanted to give it a go. For Mac it needs both -DBRLCAD_ENABLE_QT=ON and -DQt5_DIR=/path/to/dir/with/cmake/config/file
Is this available for linux?
And how to enable it?
I'm developing with it on Linux, but it's still way too rudimentary for me to recommend trying it - it's just a testbed right now for figuring out what needs to be done to make libdm and Qt work together. I know of at least one major change I need to make, and likely more.
Oh
I would like to try it out when it is available
Ping me up after this is out of beta state:sweat_smile:
@Sumagna Das out of beta state will be a long while. If you want to check it out it's located in src/isst/dmtest and will compile to produce the qdm executable with -DBRLCAD_ENABLE_QT=ON passed to CMake. Just remember it's an experiment in combining libdm and Qt at this point, so nothing much is going to work...
Oh
I will compile again with that option as this seems cool to me.
Axes with labels: axes_with_labels.png
More view elements: qdm_faceplate.gif
What are the things present in this experimental setup at the moment?
It might be helpful if you keep a list here or somewhere (at least for me:sweat_smile:)
I tested out and it was pretty good
So far, all I've got is basic view manipulation (rotate, translate, scale) and some of the faceplate elements (the last video). I've had pretty lousy luck with using a separate rendering thread (didn't work at all on the Mac, and even on Linux the faceplate matrices mess up the display - the fixes for that issue in the single threaded version didn't seem to work for the separate thread.)
I'll be looking at some more of the Archer/libtclcad manipulations next - both view element manipulation and sketch editing are going to require more than just the current view operations, and I need to figure out how that's going to work.
I'm concerned the hashing trick I'm using to avoid having to list out which commands might alter the geometry won't scale - testing with the Stanford Lucy model is not encouraging. It works, but it's slow for all commands, not just view updates.
/me adds a timer
Growl. Yep, hash calculations for Lucy are too slow for this purpose. Time to invent plan B...
View responds to attribute color setting now: qdm_edit_response.gif
Oof. First successful view polygon drawing: first_polygon_circle.png
More progress with circle creation: circle_as_vlist_obj.gif
3D line drawing: 3d_data_lines.png
Line with arrow: arrow.png
Bit of a detour: here's a demonstration of using bundled OSMesa software rendering to display the same OpenGL drawing calls used by the qtgl backend. My Linux box manifested some sort of OpenGL problem, which was annoying but also a perfect testing environment to get this working: Qt_with_software_rasterizer_fallback.gif
The initial messages are the important ones - the first lines are the QOpenGLWidget failing to set up properly, and then our message indicating we're falling back on software rasterizing. The view axes show that we have not just the wireframes but also fontstash text rendering with this backend. In principle (unlike the X backend) this is a fully portable solution to fallback rendering. As an added bonus, most of the drawing code it uses is the same code as our other backends.
OK. For anyone who may have tried the software backend, they probably saw the following if they resized the window: resize_problem.gif
This turned out to be a mismatch between Qt's QImage and the OpenGL data. I've figured out a working path, so the resize behavior should now look like this: resize_fixed.gif
(The software rendering can be explicitly triggered by launching with the "-s" option to qdm, for those that have working OpenGL.)
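For the curious, the mismatch boils down to keeping the QImage wrapper in sync with the GL-side buffer. A sketch of the idea (names are illustrative, not the actual code) - width, height, stride, pixel format and the vertical flip all have to agree with what the software rasterizer actually rendered after a resize:

// Sketch: wrapping a software-rendered RGBA buffer in a QImage for display.
// buffer/w/h are placeholder names.  The stride (w * 4) and format must match the
// rasterizer's output, and GL-style bottom-up rows need flipping to Qt's top-down order.
#include <QImage>

QImage wrap_buffer(const unsigned char *buffer, int w, int h)
{
    QImage img(buffer, w, h, w * 4, QImage::Format_RGBA8888);
    return img.mirrored(false, true); // vertical flip; makes a deep copy
}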
For grins, here is gsh running swrast without any on-screen graphics at all but still supporting screengrab: swrast_command_line_only.gif
starseeker said:
Bit of a detour: here's a demonstration of using bundled OSMesa software rendering to display the same OpenGL drawing calls used by the qtgl backend. My Linux box manifested some sort of OpenGL problem, which was annoying but also a perfect testing
That's awesome! Any metrics on the performance impact? There is a faceplate FPS meter you may be able to turn on to show what the rate is during rotation.
I think the FPS meter may be MGED only right now (it's on my list to check)
OK, got a FPS meter working. It's slow, no question.
Dragon is about 85fps with qtgl, and with osmesa it's a little over 7fps
dragon_accelerated.gif dragon_swrast.gif
The X wireframe drawing is considerably faster - on the order of 40fps
For me this is acceptable (though certainly not ideal) since the intent of swrast is as a last-resort fallback rather than a primary operating mode. Under normal circumstances the expectation would be that the accelerated context is available.
It may be possible to translate the X wireframe logic to a generic, platform agnostic equivalent of swrast, but it will still lack the shading capabilities (however slow) that swrast offers.
A possible hybrid approach would be to treat such a platform agnostic X drawing routine the same way we look at the TIE raytracing - have it render to a buffer, then composite the buffer in OpenGL to produce the final drawing. For wireframe only drawing we could do that, and then fall back to the slower all-OpenGL mode for more complex scenes.
Yeah, cool. Just good to know what that difference is at.
Both seem acceptable for those target situations.
Part of the issue is undoubtedly how old this OSMesa is - it is extracted from 7.0.4, which is the last version of the software rasterizer that didn't start depending on more complex substrates for better performance. I know the modern Mesa software rasterizers are way way faster, but they're also a lot more complex.
Hey @Sean , does the X wireframe drawing do any depth sorting?
It must be somewhere (or maybe X itself is...)
Nevermind. Not 100% sure how fast an "extracted" X wireframe render would be anyway, since I'm not sure if X has some sort of graphics driver level routines it's talking to...
First drawing of a general polygon contour: first_general_polygon.png
Closing the polygon: closed_polygon.gif
@starseeker I think it does (otherwise Z-clipping wouldn't do anything)...but I don't know for sure.
Selecting and moving points: show_selected_point.gif
First demonstration of boolean operations with new setup: first_successful_booleans.gif
Interior fill: first_successfull_polygon_fill.png
More control of filling, combined with interior polygon hole: line_fill_with_hole_and_controls.gif
Draw fill before main polygon: improved_drawing_ordering.png
First text label not part of faceplate: first_text_label.png
This is kinda fun - label with line: first_working_label_with_line.gif
First successful data axes: first_data_axes.png
Test of using view objs for primitive labeling: first_primitive_labels.png
This won't look like much to the casual eye, but it has both matrix application and correct colors using the new drawing setup (labels are just on for testing purposes): first_correct_colors_new_draw.png
This is a side effect, but got hidden line mode working in MGED: hidden_line_mged.png
No colors yet, but first shaded view of moss with new drawing setup: first_shaded_moss.png
There we go: moss_shaded_w_color.gif
Nice progress! That's some serious effort going on there.
Hidden line mode in qdm: moss_hidden_line_qdm.gif
bigE mode as draw2 -m3: rook_bigE.gif
starseeker said:
bigE mode as draw2 -m3: rook_bigE.gif
the first drawing of the rook looked like a wormhole to me :sweat_smile:
Heh. rook makes a great example of the benefits of bigE mode - large subtractions and intersections
The E command isn't working in MGED right now for some reason, but a quick look indicated the vlists were still being created (i.e. the problem is strictly on the display end) so it was worth refactoring the logic to work with draw2.
Still got some more options to work through (transparency, subtraction suppression, etc. - I'm not adding the NMG debugging draw options, those belong elsewhere). Also need to look at the drawing outputs produced by nirt, rtcheck, gqa, etc.
Can anyone test transparency (draw -m1 -x 0.7 all) on the share/db/faa/Generic_Twin.g example model? I can't get either latest main or the 7.30.10 release to draw with any sort of shading, and I'm wondering if my local OpenGL is messed up again...
Doh. Nevermind - need to enable the transparency variable in the dm
FAA Generic twin with transparency: faa_transparent.gif
And the same thing using the software rasterizing backend: faa_transparent_swrast.gif
Got the solid-lines-only and subtraction suppression going: rook_nodash_nosubtract.gif
It would be interesting to go halfway: draw dashed subtractions, but clip to the positive bounding box so it doesn't distort the entire view.
Color overriding and line width adjustment: color_line-width.gif
Early stages yet, but working on getting the adaptive plotting up and running: adaptive_first_steps.png
Working LoD on zoom in qdm: moss_lod_on_zoom.gif
Option to skip autoview when drawing to empty scene: no_autoview.gif
Faceplate grid (Archer style): faceplate_grid.gif
Confirmed rect command can create the interactive rectangle: interactive_rect.png
Mutter... still using the old pixel approach instead of a texture, but working qtgl framebuffer display: qtgl_fb_display.png
Embedded fb image display: qtgl_embedded_fb.png
Working overlay/underlay (not an actual in-app raytrace, yet): fb_overlay_underlay.gif
Potentially useful resource:
Tcl and the Qt Event Loop
https://www.tcl.tk/community/tcl2019/assets/talk160/Paper.pdf
First successful display of text output from rt: rt_output.gif
Better integration (using the callback like libtclcad now): rt_output_improved.gif
First successful generation of an rtcheck visual: rtcheck_first_success.gif
Improvements to Qt console behavior: improved_console.gif
@Sean Do you know if libpkg can communicate using stdin/stdout between two processes, as well as TCP sockets?
@starseeker yes, the code for it is there for some platforms (and that is very much a platform-specific thing), but I'm not sure how much it's exercised or what tools even make use of it.
see all the references to un.h and UNIX domain sockets -- that's the IPC mechanism you're referring to for interprocess communication.
Well... I was just thinking about launching a subprocess, sending pkg contents down stdin to the other program, and getting its responses back by listening to its stdout.
INET TCP sockets are what are more frequently created and utilized by calling apps like rt, but in theory it should chatter over a UNIX port.
Maybe that's a crazy idea...
okay, yeah -- I would expect something like that to "probably" work.
at least on linux
I'd expect windows would need some modifications because it probably assumes a tcp socket in some places .. but don't know, that's not something we've ever tested.
/me nods. OK, mostly I'm just being lazy and hoping I could use the same mechanism I'm already using for the framebuffer I/O - but if it's not something that has been tried with the code I should probably just go with what's there.
Only way to find out is to try : )
try a little demo main() that chats with itself
Bob fell back on Tcl channels for Windows, with a cryptic note that pkg_permserver "beget other mods" when trying to use it there. Maybe it's time to revisit that and see if it still applies - I know I fixed at least one issue in libpkg on Windows a while back.
I don't think you'd even end up calling that
permserver I think specifically kicks off a tcp listener. you'd be using the calls that come after that (I think).
Not for stdin/stdout, true - I've been going through the fbserv code in libtclcad since the most "straightforward" path is to translate the Tcl listening logic into Qt terms.
Unfortunately, so far at least, there's nothing straightforward about any of this... I suppose mostly because I have zilch experience with async I/O between programs.
all the more reason to try a demo. implementing a chat program with a given interface is like towers of hanoi, a rite of passage in CS curricula.
After a week, @starseeker finally gets an image to render (still in the wrong place) in the embedded fb via Qt's Tcp logic.
/me is beginning to wonder how many straightjackets are typically necessary for this sort of programming... it's almost of a piece with the joys of standing up OpenGL rendering.
Progress to date: first_embedded_fb_progress.gif
Finally...: embedded_rt_interactive_update.gif
Cool beans. Hard to get them to align?
Not too bad, once I found the right foo in the RtControl.tcl file and translated it to ert. The HARD parts were the Tcp+libpkg communication, signals/slots async updating, and getting all the OpenGL ducks in a row to draw correctly.
I've still got some work to do for the swrast version, and will need to take a file to the rough edges in a number of places, but that's a pretty fundamental "yes this really can work" milestone.
There we go - embedded fb display and updating with swrast: embedded_rt_interactive_swrast.gif
First step towards quad view: first_quad_layout_test.png
Per view drawing: quad_independent.png
Quad view with initial aet set and updating all views in response to change: quad_view_init_views_and_updating.gif
Quad view awareness of which view has had the most recent mouse click: quad_view_selection_update.gif
rtcheck in quadview: rtcheck_quadview.gif
Would be nice to provide some visual indication for which quad view currently has 'focus' for when running commands like autoview and such that apply to only one view. Maybe a hairline 1px highlight border.
I was thinking either that or changing the background color in the non-selected views. Prefer the border if I can figure out how to get that to work (the spacers I'm currently using don't accept a color setting as far as I can tell, which would have been the easiest thing - means I'll have to get a little more custom.)
Can't you just draw a 1px overlay? You had the logic for that working.
same as faceplate
Hmm... possibly
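Something like the following at the end of the draw pass, presumably - a sketch only, assuming the same old-style GL calls the faceplate drawing uses, with w/h being the viewport pixel dimensions and the color just a placeholder:

// Sketch: 1px highlight border around the focused quad view, drawn in screen space
// with legacy GL (matching the faceplate-era drawing style).
#include <GL/gl.h>

static void draw_focus_border(int w, int h)
{
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(0, w, h, 0, -1, 1);
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();
    glDisable(GL_DEPTH_TEST);
    glColor3f(1.0f, 1.0f, 0.0f);
    glLineWidth(1.0f);
    glBegin(GL_LINE_LOOP);
    glVertex2f(0.5f, 0.5f);
    glVertex2f(w - 0.5f, 0.5f);
    glVertex2f(w - 0.5f, h - 0.5f);
    glVertex2f(0.5f, h - 0.5f);
    glEnd();
    glPopMatrix();
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
}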
First concrete progress towards getting qdm work stood up inside qged: qged_qdm_first_merge.gif
I got a compilation error with Qt 5.15.2 in file src/libqtcad/QtCADTree.cpp at line 1079: QString::SkipEmptyParts is deprecated. The new constant (since Qt 5.14) is Qt::SkipEmptyParts. I don't know how common Qt 5.14 or higher is. Debian buster (stable) uses Qt 5.11, but I have to modify the sources for this version anyway because of Qt::ScrollMomentum, introduced in Qt 5.12.
@Daniel Rossberg I'll defer to @Sean on whether we want to conditionalize the code for various versions of Qt - I suppose the "correct" thing to do would be a configure-time test for various features, but I've not yet tried setting up those sorts of checks in a configure step.
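For the record, in-code conditionalization would look roughly like this (a sketch - str and the separator are illustrative, not the actual QtCADTree.cpp code):

// Sketch of conditionalizing on Qt version instead of requiring 5.14+
#if QT_VERSION >= QT_VERSION_CHECK(5, 14, 0)
    QStringList parts = str.split('/', Qt::SkipEmptyParts);
#else
    QStringList parts = str.split('/', QString::SkipEmptyParts);
#endif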
I think until there is a baseline release, we should target the latest Qt.
If something isn't working or is deprecated in the latest Qt 6, I think we'd be much better off fixing it as soon as possible instead of locking into an older Qt prematurely.
The only reason to ignore the deprecation warnings would be in code that we're potentially not going to use.
Figures... in the end, my efforts on the Accordion widget weren't needed - if I rethink how I'm organizing the info/control widgets a bit, QDockWidget itself can do the job even better: docked_palettes.gif
looking very "right"! can the console be moved to a side? can the h-order be changed by the user? (ultrawide monitors kinda change the ... geometry of the situation.)
Console can be detached, but it's bottom only at the moment. Not sure what you mean by h-order?
The ordering in the column can be changed (attributes at top, view controls at bottom, etc.)
If it makes sense to allow the console to be side docked that's an easy setting change in the code - trying to trade off flexibility vs. too much opportunity for users to do unexpected things.
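(For reference, it really is a one-liner kind of change - something along these lines, where console_dock and main_window are just whatever the console's QDockWidget and the main window end up being named:)

// Hypothetical: let the console dock on the bottom or either side instead of bottom-only
console_dock->setAllowedAreas(Qt::BottomDockWidgetArea | Qt::LeftDockWidgetArea | Qt::RightDockWidgetArea);
main_window->addDockWidget(Qt::BottomDockWidgetArea, console_dock);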
h-order like moving the geometry window to the right side, or middle, or move the view window to the right, or ...
flexibility vs 'user-proofing', I guess
screens are getting wider faster than taller, top and bottom real estate are precious, so we end up with a bunch of columns
There we go - highlighting restored with new dockable palettes: qged_restored_highlighting.png
Man page viewing with brlman, now with Qt! qged_brlman.gif
Interactive updating of info from the view: qged_view_info.gif
is this going to be the new user interface of mged or archer after its complete?
That's the long term plan, yes.
no
i am asking which one is going to have this UI
MGED or archer?
Oh. I suspect the first to be replaced will be Archer, since it's less mature.
The plan is to eventually replace both - I may try making an MGED style interface using a toolkit other than Qt to make sure I'm not baking any Qt-isms in too deep, but there's not much point in that until the Qt pieces are in place.
oh
Or even better if someone else wants to do an MGED style interface ;-)
View info with quad view: qged_view_info_quad.gif
Well, my event filter trick for the tree is a problem, but leaving that aside for the moment here's GUI based on/off control for faceplate elements: qged_faceplate_ctrls.gif
There we go - view manipulation and tree highlighting: qged_view_panel_with_highlighting.gif
First polygon creation in QGED with the mouse: qged_first_polygons.gif
Support for creating multiple polygon shape types: qged_multiple_polygon_shape_support.gif
Some progress on manipulating polygons: qged_polygon_progress.gif
Application of boolean ops to polygons: qged_polygon_booleans.gif
super awesome... so the 'csg' (poly bool ops, I guess) is applied immediately and the stored result is always explicit?
Yes - it's the clipper boolean outputs, and they'll replace the original polygons.
Hey @starseeker ... any chance you can try to stash the result into a sketch object?? That could be the start of an outstanding replacement for the sketch editor, or even just as a debug facility to stash the results into the .g
Yes, Bob had logic for that and I've got it integrated at lower levels. Not exposed in the GUI yet - need to get things like color and fill going first - but it's definitely on the list. (Once I have that working, I'll shift the tools from the view to the object edit palette.)
To do a proper sketch editor with it, we'd have to associate non-polygon sketch segments with the polygon segments in such a way that they survive boolean ops, so we can turn around and use the resulting polygons to construct (say) an arc from a circle if the circle got chomped during the boolean.
I think I saw something in the development version of clipper that might be useful for the bookkeeping part of that, but I've not tried it as I didn't want to toss an unfinished clipper into the mix at this stage.
Setting colors on polygons: qged_boolean_colors.gif
More editing progress for booleans: qged_polygon_editing_progress.gif
starseeker said:
To do a proper sketch editor with it, we'd have to associate non-polygon sketch segments with the polygon segments in such a way that they survive boolean ops, so we can turn around and use the resulting polygons to construct (say) an arc from a circle if the circle got chomped during the boolean.
Maybe, but even if it turned it into all polylines, I think that'd be just fine. I mean, that's worlds better than the current options.
The only question you'd have to sort out for revolve and extrude, I think, is where the principal axis resides (e.g., always at [minX,minY]).
Would be interesting to figure out how to extend sketch with color attributes, so those can be stored in objects too.
starseeker said:
More editing progress for booleans: qged_polygon_editing_progress.gif
These animations are making me interested in your work. If you arrange a virtual meeting (with screenshare etc.) to present your work, I would be interested in joining.
However I am not sure whether this is a good idea, or whether other people would be interested in joining too.
Myself I am working on improving the documentation system, and I would like to make a presentation of my work by the end, when most of it is done. Assuming that people are interested in joining it.
Thanks! It'll probably be a while before it's presented anywhere, but I appreciate the interest.
Here's the creation of sketches from polygons hooked up to the gui: qged_create_poly_sketches.gif
Note in particular the appearance of the new sketches in the tree widget on the left as they are created, as well as the update due to renaming.
And restoring a polygon from a saved sketch: qged_restore_polygon_from_sketch.gif
Able to set vZ now to shift the polygon planes up and down: qged_polygons_vZ.gif
Slightly more intuitive way to set vZ - specify a 3D point. Example showing adding a circle through the center of a torus:
qged_setting_vZ.gif
starseeker said:
And restoring a polygon from a saved sketch: qged_restore_polygon_from_sketch.gif
Er, terminology check -- is that actually a sketch?? Looks like a serialized view object, not a sketch object.
It's a sketch object with metadata for colors and fill state stashed in attributes.
What is being drawn is the view object generated from that metadata and sketch data. If you were to draw the t1 sketch in the scene, rather than importing it as a view polygon, you would see a red circle.
I wouldn't have thought to store as attributes, but neat compromise.
Not fully there yet, but first real progress on an updating TreeView for .g files:
first_working_QgTreeView_update_-2022-02-13-22-38.gif
How does it perform with 100k items and doing insertions/deletions in the middle? Responsive?
Haven't stress tested yet - still working on getting it not to crash in weird places.
I'm seeing crashes after return from main() again... appears to be some corruption. Trying to catch it.
@Sean this is with the latest commits from today? (presumably on Mac?)
Just committed one more change that might help
Got highlighting going in combination with tree updating: highlighting_with_tree_updating_QgTreeView_2022-02-16-16-49.gif
in command (incremental prompting for input) in qged: in_extrude_qged_2022-02-17-11-26.gif
Awesome!
Proof-of-principle scene obj from .g obj (one of the pieces needed for MGED's manipulate-before-saving-to-disk editing of primitives): view_only_g_obj_wireframe_2022-02-18-15-12.gif
@starseeker thinking of the recent PR, I'm wondering if we should keep track of who contributes to qged in AUTHORS or in a separate README/AUTHORS file that's qged-specific in case it moves
ddreeves70 was involved back in 2009 on the nurbs/brep ray tracing code ... wonder if he'd pick it up again to work on optimization
@Sean I'm game to keep a separate README/AUTHORS going, although it's not my intent to split qged out from the main repo yet. (I'm deliberately molding the backend libraries and the GUI together to try and improve both - when things get closer to a working/finished state we can have that discussion.)
That'd be cool - my thought though is new GUI first, then nurbs/brep ;-)
Here's a video of @David Reeves changes in action: qged_switch_to_quadview_ddreeves70_2022-02-24-19-10.gif
yeah, @David Reeves don't take my comment to imply a desire or priority or such, just a passing comment. you should work on whatever interests and motivates you!
That quad view support looks good, impressive change!
Thanks Sean and Cliff for the kind words. Hopefully I can make more of an impact here. I will for now continue working on QGED. Cliff, I was thinking of looking into the HUD stuff next, but if you think there is something better for me to work on, I am fine with working on the other stuff. Also, if you guys will point me to test cases and the goal of improvement, I will work on the NURBS stuff if I need a break from QGED. Also, on QGED, is it ok if I commit directly to main, or do you want me to continue with pull requests? Also, on QGED, if you are ok with it I would not mind doing a little bit of refactoring to fix some inconsistencies with the HACKING guide. It bugs my OCD nature. BTW, if you are discussing Author credit stuff on QGED for my benefit, don't worry about it for now. If QGED gets released you can give me credit then.
@David Reeves Go ahead and commit directly to main on QGED, that's fine.
Please feel free to refactor to conform with HACKING - when working on QGED I've been trying to drive hard and fast towards functionality, and often times that means I end up committing something that works but doesn't satisfy all the style points. (Indeed, most of the spelling mistakes fixed in the recent pull request were my fault.)
@starseeker I apologize if it sounded like I was being critical; that was not my intention. I am very inspired by what has been done so far. It is great work. The integration between dm, fb, and Qt was no small feat
@David Reeves not at all! That was an explanation, not a complaint ;-)
If you want to see me complaining, watch me try and debug some obscure interactive problem in the Archer Itcl/Itk code :-P
@starseeker I opened qged with moss.g, nothing displayed, and when I select view model axes I get a core dump. When I look at what is happening, I'm getting an assert on line 3155 of libdm->fontstash->stb_truetype.h; if I uncomment lines 3151 & 3152 that you commented out 5 months ago due to a clang static analyzer report, it passes the assert and the axes display as expected. I'm not really sure what is going on, but this is the core dump I was talking about earlier - I just didn't have time to open it up in gdb to find out where it was dumping. I'm curious why my code is all of a sudden triggering this failure. I remember before my latest round of changes axes display was working. Any thoughts?
@David Reeves good catch. I'll re-run the analyzer on the previous commit to take another look at what the error was.
My first thought was that something about the view initialization might be different between quad view and the single widget - if you look through the backtrace, at some stage the call that's getting into fontstash and stb_truetype is getting input values from our code. If the view is the same between the quad view single-window version and the previous single-widget approach, then something must be initialized differently in the data structures
Or perhaps is uninitialized, which might explain why you're seeing it and I'm not - different random garbage can produce different results...
@David Reeves OK, I was definitely too aggressive in my stb_truetype.h changes. I've committed an adjustment that will deal with the immediate functional issue for you.
It's still an interesting question as to why quad and individual behaved differently, but a debugging session to ID what's different about the inputs to the font calls is probably the way to run that down.
One thing I do notice - I don't get a crash, but if I bring up the model axes as you suggest the location of X changes from the lower left corner of the screen (1st drawing) to what I think is the proper position.
@David Reeves So the bizarre values being fed to the text drawing routines for the initial passes (that are getting clamped by stb_truetype) are from dm-qtgl.cpp:342 - the glGetFloatv(GL_CURRENT_RASTER_POSITION, pos) call
For the first two calls, pos is set to {0, 0, 0, 1}
Only on the third does it return something usable: {702.445312, 621.945312, 0.50012207, 1} in my case.
@David Reeves Still not sure why the quad view triggers it, but looks like the odd text positioning behavior was a consequence of the glRasterPos being out of range in the first few draws - addressed by clamping the range.
fwiw, I see the same behavior in the single view w/o quad widget involvement here. In no case so far have I seen a crash here, so it may be you're getting "lucky" in having something in the right memory location to wipe out w/ the quad mode + invalid behavior combo...
Anyway, let me know if you still have problems with latest main.
@starseeker yes it appears to be working with the latest. Just to be clear, my core dump was because I have a debug build and I hit an assert, which causes a core dump. I was not getting a core dump from a seg fault. The assert was there to make sure the delta x and y were greater than 0
I also could not figure out what I had changed that would have triggered this, other than that the axes display was close to the edge of the screen, and in quad it went over the edge and triggered some functionality that caused the axes to be displayed differently than normal.
Did you mean to say the view axes vs model axes should be on the bottom left?
I'm thinking the model axes is displayed at the origin...
No, it's in the right place now - I was seeing it in the wrong place in the first frame, which was a consequence of the attempt to draw text off screen not working correctly - stb_truetype was "correcting" the numbers, and that ended up with the labels in the lower left for me.
It should be OK now - the only improvement would be to adjust the text drawing logic to support partial text drawing when the origin coordinates of the text label are offscreen, but part of the text would be visible coming in from the left. That would be ideal, but also a bit of a pain to get working so for me it's a lower priority.
And of course staring at it the annoyance got to me, so I had to try and correct it... I think everything should now be behaving correctly.
You can see the behavior in question if you drag the model axes to the left - previously you'd see the labels abruptly disappear as they reached the edge of the screen, now they should just be partially visible (as you would expect.)
Yeah that looks way better!!
Although I would like to position the view axes so that no matter how you rotate, they do not go off screen. Are you ok if I make that adjustment?
Ok, looking at util.c, bv_settings_init sets the view axes axes_os to 0.85, -0.85, 0 and then sets size to 0.2. That is why it will rotate off screen. I think I will adjust it to 0.8, -0.8, 0. On my display I would be ok with adjusting size to 0.15, but I think that will make it too small on other people's displays
ok I guess I don't have commit permissions
I will make the change in the fork and do a pull request with my next set of changes
Hmm. I'll have to double check with @Sean on the commit permissions - still new enough to Github I'm not sure how that works
Not that big a deal - having to work through forks just takes about 30 mins total to deal with and is a little disruptive to the flow.
Now that I have already done it, I'm wondering if I have to delete the forked repository? I wonder if I can just commit to the fork and then generate a pull request whenever I'm ready for it to be merged into main?
if all I have to do is generate a pull request then it may not even be 30 mins of time
On the (rare) occasions I've done pull requests, I make a branch in my repo to hold the commits related to the specific change, make the pull request from there, update my local copy once the PR is merged upstream, then make a new branch for the next request. Dunno if that's the "right" way or not...
I'm looking at the project now to see if I'm able to fix your commit status...
@David Reeves I think I sent something...
pushed my changes to adjust defaults on the view settings. This will keep the view axes from rotating off screen
Also made a change to Qged in quad mode to show which view is selected.
@David Reeves Nice! Like the active quad highlighting.
Just FYA, it looks like your push undid a couple of changes in a prior commit. I typically do a "git pull --rebase" before doing a push to make sure I'm not overwriting any new commits.
Yeah, I had an issue with my local git. Sorry about that
no problem - easy fix. So you're planning to dig into the primitive editing next?
Yes
David Reeves said:
ok I guess I don't have commit permissions
You do now! We didn't automatically add everyone with commit access on Sourceforge, like yourself, who had already earned commit rights. I probably should at least invite people where accounts exist.
@starseeker anyone that's in the dev group on sf should just be invited if they had access before
-- we don't presently have a use case for the restriction that outside collaborator status introduces.
maybe someone with a professional temporary relationship, but even that would be a bit of a strange designation and all it does is prevent them from creating/seeing our teams, which is not a secret
@Sean OK - I wasn't quite sure how all that interacted, but was trying to get @David Reeves up and running quickly. Hopefully I didn't make too much of a mess...
Thanks for the help. @starseeker are you having issues with your fb? I'm not seeing ert - maybe just a local issue, but I don't think so. I think something has happened. Anyway, let me know if you are seeing an issue; in the meantime I will debug to see what is going on
@David Reeves I just pushed a commit to have ert enable the fb if it's off - does that help?
yes, i forgot about turning on the fb
I like those easy ones... they should happen more often
@starseeker so I took a quick look at mged. How do we want to approach this? Do we want to essentially copy functionality from mged to qged (libqtged)? Do we want to attempt to get this functionality in libged? I saw in your TODO that you didn't want to move edit functionality into libged - I'm sure I have missed some documentation somewhere, but why? I'm just trying to catch up with what you have in mind; I'm not attempting to push any particular approach. I'm fine with copying and editing into qged but want to make sure before I go that route. Before it is done it will be several thousand lines of code. Looking at edsol, a lot of that will end up being needed in qged
I think it'd be awesome if libged had editing capability, but the interface for such a thing doesn't currently exist. That's why edsol and friends still live over in mged's folder -- they're kind of intertwined with a notion of an editing state, and libged is designed to be stateless.
same reason there's not a "sed" command or the "push" buttons interface (for programmatically clicking on the menu)
@David Reeves So there are really multiple dimensions to primitive editing. There's Qt widgets for each primitive (similar to what archer does) paired with basic wireframe updates in the view as parameters are changed (where I was planning to start.) Then there's the in-scene menus of the faceplate editing. That's where some of our most sophisticated in-scene interactions take place (pipe editing, for example) but that code is also heavily tied to MGED's state. I was initially planning not to replicate the in-scene editing and instead make sure the combination of wireframe displays + Qt widgets could do everything we can currently do with the faceplate GUIs.
My initial notion was to have the Qt widget create an rt_db_internal for the primitive in question (either a default primitive, a la make, if creating a new one, or the current state of the specified object on disk if an existing solid's name is supplied) and then manipulate that internal with the editing operations, getting updated wireframes from the internal and using those to update a view object. Then, either when editing has stopped for some period of time or the user explicitly requests a save, the rt_db_internal is written to disk.
Especially for large primitives like bots we can't continually stream all incremental changes to disk as the mouse is dragged, but the wireframe will have to be continually adjusted as incremental parameter states are set - that was why I was working on adjusting the drawing logic to be able to create view objects from .g object data that are independent of the on-disk object.
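In rough code, the flow I have in mind looks something like the sketch below - the librt calls are from memory, so treat it as illustrative rather than working code:

// Sketch of the edit flow: pull the object into an in-memory rt_db_internal, let the
// Qt widget modify it, regenerate wireframes from the internal for the view object,
// and only write back to disk on an explicit (or idle-timeout) save.
// API usage is from memory - illustrative, not working code.
#include "raytrace.h"

int edit_session_sketch(struct db_i *dbip, const char *obj_name)
{
    struct directory *dp = db_lookup(dbip, obj_name, LOOKUP_QUIET);
    if (dp == RT_DIR_NULL)
	return -1;

    struct rt_db_internal intern;
    RT_DB_INTERNAL_INIT(&intern);
    if (rt_db_get_internal(&intern, dp, dbip, NULL, &rt_uniresource) < 0)
	return -1;

    /* ... widget edits the parameters held in intern ...                   */
    /* ... after each change, regenerate a wireframe from intern (via the   */
    /*     primitive's plot routine) and hand it to the view object ...     */

    /* Only on save (or when editing goes idle) does the internal hit disk */
    return rt_db_put_internal(dp, dbip, &intern, &rt_uniresource);
}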
Ok thanks I will work on it from that direction
@Erik If you're in the mood for a small OpenGL challenge, I had a heck of a time trying to get our standard librt raytracing output to properly display in a texture the way the ISST rendering does. All I could get was a pair of white triangles, despite making an attempt to follow what was done for ISST - consequently, the qtgl framebuffer implementation is still using the old school pixel method. I'm reasonably sure I was just doing something stupid, but I couldn't find it - feel like showing me up?
It's not critical obviously in the sense that the current method does work, but I was hoping to get the texture based approach working for both adrt rendering and our normal fb output and then just make it a setting to switch between them.
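For reference, the texture path I was attempting boils down to roughly the sketch below (from memory, not the actual failing code). One classic cause of all-white output, for what it's worth, is leaving GL_TEXTURE_MIN_FILTER at its default, which expects mipmaps:

// Sketch of a texture-based fb display (vs. glDrawPixels): upload the fb pixels on
// each update, then draw a screen-filling textured quad.  Assumes an RGB byte buffer
// "pixels" of size w x h - the names are illustrative.
#include <GL/gl.h>

static GLuint fb_tex = 0;

void fb_texture_update(const unsigned char *pixels, int w, int h)
{
    if (!fb_tex)
	glGenTextures(1, &fb_tex);
    glBindTexture(GL_TEXTURE_2D, fb_tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // avoid the mipmap default
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
}

void fb_texture_draw(void)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, fb_tex);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
    glEnd();
    glDisable(GL_TEXTURE_2D);
}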
@starseeker Just letting you know I'm still looking at the primitive stuff. It is a lot to get my head wrapped around. I think I'm about to get there now. Hopefully I will have something prototyped by the end of next week.
@David Reeves No worries - I knew it would be a deep dive. If you're finding design challenges you want to discuss, feel free to bring them up.
Mainly right now I’m focused on getting something to work. Here are some things that I think are worth thinking about:
1) plugins should do all the work to connect themselves up to the application. For example there is code in main_window to connect signals and slots in the plugins. I realize this is probably a result of rapid prototyping but just something I noticed
2) I feel we need a layer of abstraction between components. For example the hierarchy and the other plugins should not directly connect. Signals from hierarchy should be connected to an abstract signal in the app. Interested plugins should connect their slots to the abstract signal.
3) I’m not a fan of the tabs for the primitive editing plugin unless there are multiple parts to the primitive. I don’t really feel like we should be editing multiple primitives at the same time.
4) thinking we should add a filter field to the hierarchy component
5) need to be able to view from the hierarchy. Maybe use the eye icon everyone else is using. Also, the plugins should give you multiple gui components: an editor for seeing the details of a primitive, but also primitive-specific toolbars, menubars, etc. for that specific primitive. All of that should come from the plug-in
6) long term it would be good to have an undo history. Or even something like a revision history. Only for while editing
Anyway, nothing huge. For the most part I like the design - pretty straightforward to understand. My struggle is more around libged, understanding what mged is doing, and archer. Also have to understand the whole rt_db_internal stuff. I think I'm getting that now but am working on the next step of implementing something
David Reeves said:
Mainly right now I’m focused on getting something to work. Here are some things that I think are worth thinking about:
[snip]
Excellent comments overall, imho! Definitely appreciate your perspective.
6) long term it would be good to have an undo history. Or even something like a revision history. Only for while editing
The plan here is largely sitting in a patch, but the original idea is that most if not all GUI operations end up going through the libged layer, and that layer would automatically handle event dispatch, transactions, and undo. It doesn't presently, but that was how it was envisioned.
So a GUI app would be able to tie an undo button to the libged undo/redo commands and the right things would happen. Someone did work on this just last year, but it's not yet integrated.
And I should caveat that those are specific to "ged-using apps", just my opinion (others may vary), and that apps that go through the higher-level OO API in rt^3 (aka MOOSE) have different constructs to work with. Personally, I think MOOSE is the better architectural approach as it already encompasses in-memory editing of objects very well (much better than libged), and it has a hard boundary (it's strictly a C++ layer above the C libs, so there's no possibility for cheating and couplings like you noted). Alas, they're still separate codes solving related but different problems inefficiently...
On the other hand, replacing archer has a different set of challenges, and building modular widgets on top of libged should be reusable long term if they're not tightly coupled as you noted.
David Reeves said:
1) plugins should do all the work to connect themselves up to the application. For example there is code in main_window to connect signals and slots in the plugins. I realize this is probably a result of rapid prototyping but just something I noticed
Rapid prototyping and trying to figure out/remember on the fly what was possible and what makes sense - it's been a long time since my Qt training. I very much doubt what is there is what it should truly look like - my main focus was making sure that plugins could trigger view updates and view updates could trigger plugin updates without either knowing anything explicit about the other and without infinite loops. I'm quite sure there are other considerations beyond those, but unfortunately they've not gotten any attention yet.
2) I feel we need a layer of abstraction between components. For example the hierarchy and the other plugins should not directly connect. Signals from hierarchy should be connected to an abstract signal in the app. Interested plugins should connect their slots to the abstract signal.
That's one of those Qt questions I didn't/don't know the answer to - whether there are concerns stacking layers of signals and slots on top of each other. I suspect you're right and an "app bus" approach makes better sense, but I did have an initial wary reaction in that such an abstract approach might make it very difficult to sort out which specific signal is triggering a slot inappropriately while debugging. Whatever design we end up with, I think we're going to want a "beginning-to-end" example showing folks how to debug problems in the signal/slot connections.
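If we go that way, the "bus" could be as simple as a central QObject the app owns that everything connects through - a sketch, with made-up class and signal names:

// Sketch of an "app bus": plugins connect to application-level signals rather than
// to each other directly.  All names here are made up for illustration.
#include <QObject>
#include <QString>

class QgSignalBus : public QObject
{
    Q_OBJECT
  public:
    explicit QgSignalBus(QObject *parent = nullptr) : QObject(parent) {}

  signals:
    // The hierarchy widget's selection signal gets forwarded to this...
    void objectSelected(const QString &obj_name);
    // ...and view widgets forward redraw-worthy changes to this.
    void viewChanged();
};

// Hierarchy side (signal-to-signal forwarding is legal in Qt):
//   connect(tree, &HypotheticalTree::objSelected, bus, &QgSignalBus::objectSelected);
// Plugin side:
//   connect(bus, &QgSignalBus::objectSelected, plugin, &HypotheticalPlugin::onObjectSelected);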
3) I’m not a fan of the tabs for the primitive editing plugin unless there are multiple parts to the primitive. I don’t really feel like we should be editing multiple primitives at the same time.
Sorry, not following - do you mean the tool palette, or the editing panel below it? The tool palette was intended to provide a Gimp-style selection between different primitives, letting users easily create new objects of each type, and then to be automatically set to the correct button when they select a particular existing object to work on. In essence it would make the primitives graphically "discoverable" for users. I've not thought through the design on that in any detail though, so feel free to implement whatever makes sense.
Just realized it currently does look like tabs since there's only one palette button present - see, for example, https://brlcad.org/~starseeker/qdm_videos/qged_qdm_first_merge.gif
4) thinking we should add a filter field to the hierarchy component
Absolutely! The hierarchy widget also needs to be able to display attributes in columns and switch to a "flat" view (i.e. the ls command output) - I rather ran out of steam on it after (finally) getting it to respond to edits, but it will need at least one more round.
5) need to be able to view from the hierarchy. Maybe use the eye icon everyone else is using. Also, the plugins should give you multiple gui components: an editor for seeing the details of a primitive, but also primitive-specific toolbars, menubars, etc. for that specific primitive. All of that should come from the plug-in
By "view from the hierarchy" do you mean adding or removing things from the scene, i.e. "draw" and "erase"? If so, definitely agree - the context menu and fixing the double-click behavior to draw and un-draw are also on the hierarchy to-do list.
6) long term it would be good to have an undo history. Or even something like a revision history. Only for while editing
@Sean already addressed this one, but even beyond editing undo I'd like to actually make the .g file what you might think of as a "git geometry archive" where you could check out earlier states of the geometry from the .g file itself. It's quite an interesting problem in many ways, but unfortunately it'll likely be a while before we can devote the necessary resources to designing and implementing that infrastructure.
Anyway, nothing huge. For the most part I like the design - pretty straightforward to understand. My struggle is more around libged, understanding what mged is doing, and archer. Also have to understand the whole rt_db_internal stuff. I think I'm getting that now but am working on the next step of implementing something
rt_db_internal is basically the transition from "what we pulled off of disk" to "data we can work with in memory". I've not studied in detail what MGED and Archer do internally for their editing state management, so if you need assistance there let me know and I can do some digging. MGED's internal state is rather confusing and messy at times, so I'd suggest using it mostly as a guide on how to do specific individual operations and unpack/generate data - it does at least do many things in C. Archer tends to do most things in Tcl/Itcl, so the "translation to C/C++" problem is a bit more involved there.
@David Reeves How are things looking? Need any help with the rt_db_internal concepts?
@starseeker It's going fine. Right now I'm good. I have been very distracted with a work project. I should be able to focus on this in the next couple of days. Thanks for asking.
Question: I'm reviewing all the code to get familiar with it. I'm wondering, is it optional to use software graphics if it was compiled with opengl? It doesn't appear to be a selectable option, but there seems to be code throughout to support it being an option. Maybe the concept is there, but for this pass it is determined at compile time and in the future you plan to make it optional?
Also what do you think about in the libqtcad if we get rid of the 'Q', 'Qt', 'Qg' prefixing and use namespace(s) instead?
Also I am seeing classes that are making attributes public. Do you have an issue if I refactor that stuff to be private and use inline accessor methods instead?
Are you ok if I rename CADQuad to CADViewport, now that we have modified things to always use that class as the central component vs. either the quad or single view?
On the software vs opengl question, I think unless we do a significant amount of restructuring it is not going to be something that is runtime selectable in the future. If that is the case, do you mind if I do a little refactoring to embrace the idea that it is compile-time selected? I think in places we can simplify the code. The more important issue is to get rid of the concept so we don't have to keep making these weird decisions about when to use the preprocessor flags vs. looking at the type of graphics.
I'm going to get to the primitive stuff, but I started looking around for the best place to inject it. In that process I started noticing things that might make future growth easier and more consistent if we did a little cleanup...
Hey I decided to create a branch to make the changes. If you don't agree with the changes we can always discard them.
qged_cleanup is the name of the branch
@David Reeves the software rendering is exposed as a qged command line option
qged -s
Generally speaking I'm fine with refactoring and renaming - @Sean may have some thoughts, but I'm not strongly opinionated - what's there right now can usually be classed as "well, it had to be called something..." if I wrote it.
@David Reeves I'd prefer to keep the runtime ability to use either software rasterizing or system opengl - there are usage scenarios where that ability is needed, and I did go to a lot of trouble to make sure we could do that.
In fact, the swrast rendering is not only a user option but qged will also kick it in automatically if the system OpenGL fails to initialize at runtime.
@starseeker ok yeah I guess I missed it. Thanks
@David Reeves I think your code cleanup ideas sound great to me. Definitely agree with scrubbing the technology branding. That typically makes for pretty suboptimal marketing, even when other devs are the audience (e.g., the sea of undifferentiated py* projects and rb*projects and *.js projects...). So if you're willing to put the thought into good naming, that should be a welcome improvement.
If the code for software rendering is spread all over the place, that would be an antipattern that might benefit from being abstracted/encapsulated better. The point of software rendering is that it just happens and application code is unaware, so if that's not presently the case, I'd think they're fair game to refactor. That said, as was mentioned, having a software rendering fallback is potentially useful for a host of runtime environments where opengl isn't configured correctly, has a broken driver, isn't available outright, etc.
Although I may not have properly succeeded, I attempted to keep the software rendering vs. system rendering hidden behind the QtCADView widget - the idea is to use that, and the specific details needed for system opengl and swrast are encapsulated in the QtGL and QtSW widgets, which the app shouldn't interact with directly.
I didn't mean to imply that it was all over the place. I just saw it as I was code reviewing. I was following a particular path through the code and it kept popping up. I was probably flipping back and forth in the same set of files. Besides the QtGL and QtSW classes I have not really seen it too much. I think it is only in 3 other classes: CADView, main and main_window. Then mostly only in CADView. I think there probably is a way to abstract away from it. However, it is not that bad; my main issue is I did something silly and didn't realize it was a runtime-selectable option. Again, I think you have done a fantastic job with what you have done so far. I think it is important to clean up now while the code base is still fairly manageable. The more it grows the more work it is to clean up. The code appears to be the result of a lot of prototyping, which is normal early on. The work you did on integrating with DM and FB is probably something I would have struggled with. This integration is what has me excited to work on QGED. There are still some integration challenges but you definitely have tackled the toughest one in my opinion. The rest I feel I can help with.
@David Reeves One thing I could use some help on - I'm trying in event_filter.cpp to detect when my focus is on the display views, and then (and only then) trigger the N key binding to shoot nirt. Unfortunately I've not been able to successfully narrow the focus for that action - it either always triggers or never does (my function to detect if the active widget is a child of the widget of interest doesn't work - I suspect it may be because I'm not establishing proper parent/child relationships for the widgets, but am not sure). Is this something you have experience with?
@starseeker when you get the focusWidget you need to start from the main window. That method returns the last widget under the widget you call it on that had the focus - you are always checking from c4 down instead of from the main window down.
static bool
widget_active(QWidget *w, QWidget *checkWidget)
{
    bu_log("w: %p\n", (void *)w);
    // focusWidget() returns the last child of w that had keyboard focus.
    QWidget *fw = w->focusWidget();
    bu_log("fw: %p\n", (void *)fw);
    // Walk up the parent chain from the focused widget - if we reach
    // checkWidget, the focus is somewhere inside it.
    QWidget *cw = fw;
    while (cw) {
        if (cw == checkWidget)
            return true;
        cw = cw->parentWidget();
        bu_log("cw: %p\n", (void *)cw);
    }
    return false;
}
bool QGEDFilter::eventFilter(QObject *, QEvent *e)
{
    CADApp *c = (CADApp *)qApp;
    if (!c || !c->w)
        return false;
    // TODO - look into QShortcut, see if it might be a better way to manage this
    if (e->type() == QEvent::KeyPress) {
        // If we want to have key bindings that run GED commands, we will need
        // application level information - the view widget doesn't know about
        // the gedp.  To do this, we check if the central widget or one of its
        // children has the focus, and check if the key event is one of our
        // bound events.  If so, we may perform the bound action.
        QKeyEvent *k = (QKeyEvent *)e;
        if (k->modifiers().testFlag(Qt::ShiftModifier) && k->key() == 'N') {
            if (!widget_active(c->w, c->w->c4))
                return false;
            // ... (the bound nirt action follows here; elided in the original paste)
        }
    }
    return false;
}
@starseeker BTW you could have used QApplication::focusWidget() to return the widget with the current focus in the application
static bool
widget_active(QApplication *app, QWidget *checkWidget)
{
    bu_log("app: %p\n", (void *)app);
    // QApplication::focusWidget() returns whichever widget in the whole
    // application currently has keyboard focus.
    QWidget *fw = app->focusWidget();
    bu_log("fw: %p\n", (void *)fw);
    QWidget *cw = fw;
    while (cw) {
        if (cw == checkWidget)
            return true;
        cw = cw->parentWidget();
        bu_log("cw: %p\n", (void *)cw);
    }
    return false;
}
@David Reeves Ah, qApp does do the trick - thanks!
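For future reference, the QShortcut idea from the TODO in that code would look roughly like this (untested sketch - the helper name and the assumption that c->w->c4 is the central display container are placeholders). A shortcut with WidgetWithChildrenShortcut context only fires while the given widget or one of its children has focus, which is exactly the scoping the focus walk does by hand:

// Untested sketch of the QShortcut alternative mentioned in the TODO above.
#include <QKeySequence>
#include <QObject>
#include <QShortcut>
#include <QWidget>

static void bind_nirt_key(QWidget *display_container)
{
    QShortcut *nirt = new QShortcut(QKeySequence(Qt::SHIFT | Qt::Key_N), display_container);
    // Only trigger while display_container (or one of its children) has focus.
    nirt->setContext(Qt::WidgetWithChildrenShortcut);
    QObject::connect(nirt, &QShortcut::activated, []() {
        // run the bound GED command (nirt) here
    });
}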
@starseeker Update on my refactoring - I'm still working on it. I decided to go a little deeper than just naming cleanup, so it will most likely be next week before I have something you can review.
@David Reeves No worries - I've got my head in another part of the display logic right now anyway, so it's a good time for what you're doing both tactically and strategically. The next big stab I take at qged will probably be to try and implement as much feature parity as possible with MGED/Archer, so we definitely want the foundation in place before all that extra code gets written/translated. There's probably still some backend work to be done moving things to libs (just to pick one example I know the fancy MGED pattern tool logic needs to move down at least to clone, and possibly deeper) but once editing support is in place it's going to be a matter of going through the GUIs, seeing what's missing, and making it happen :-)
@David Reeves How's the refactor going?
Progress is being made, but I'm still in the middle of the effort. I was able to get some significant cycles in today, but it was a little slow last week.
Am I holding you up yet?
@David Reeves nope, not yet - just checking to see how it was/is going. I'll probably be focused on display logic at least a couple more weeks, based on what I'm seeing so far.
Very much prototype, but here's Level-of-Detail (LoD) in quad view, with each view showing a different level of zoom and detail. You can see the effect of zooming in or out on the frames-per-second (FPS) meters in the lower left corners. lod_quad.png
@starseeker
I'm still at least a week or two from being done with what I'm working on. I finished what I was doing on the model stuff and I am working on the plugin stuff right now. Once I get that refactored I will have to update the existing plugins to follow the refactored approach. After that I'm going to refactor the whole GL/SW handling to use a base class so that we can get rid of all the #ifs (roughly along the lines of the sketch below). I pretty much think after that I'm done refactoring, except I need to add all the standard comments at the end of files and document the classes and methods.
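Just a sketch of the idea - class and method names here are placeholders, not what will actually land:

// Hypothetical sketch: a pure interface both canvas widgets implement, so the
// rest of qged can hold a QtCADCanvas* with no #ifdef / backend checks.  The
// real QtGL derives from QOpenGLWidget and QtSW from QWidget, hence an
// interface class rather than a common widget base.
#include <QOpenGLWidget>
#include <QWidget>

class QtCADCanvas
{
  public:
    virtual ~QtCADCanvas() = default;
    virtual void need_update() = 0;   // schedule a redraw
    virtual QWidget *widget() = 0;    // the concrete Qt widget, for layouts
};

class QtGLCanvas : public QOpenGLWidget, public QtCADCanvas
{
  public:
    using QOpenGLWidget::QOpenGLWidget;
    void need_update() override { update(); }
    QWidget *widget() override { return this; }
};

class QtSWCanvas : public QWidget, public QtCADCanvas
{
  public:
    using QWidget::QWidget;
    void need_update() override { update(); }
    QWidget *widget() override { return this; }
};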
Question for you: I don't quite follow what you are doing with what I think is selecting multiple objects for editing. Is the concept that you can select multiple objects for editing, and then each object is highlighted in the tree and there is a button in the object editor dock?
@David Reeves
Awesome! Make sure with the plugins to test "both ways" - i.e., update the view from both the GUI and the command line and make sure the view info plugin gets the word, and make sure the view control plugin can update the scene without anything getting into signal loops. I had to add some debug printing early on to figure out when I was triggering signals incorrectly - presumably a better architected solution will help minimize those issues, but I'm still scarred from the early experiences so I'm paranoid ;-)
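The kind of thing I mean, as a generic sketch (not the actual qged plugin code - the spin box is just a stand-in for any view control):

// Sketch: when the view changes from the command line, sync the GUI control
// without letting its change signal fire and push the value back into the
// scene (which is how the signal loops start).
#include <QDoubleSpinBox>
#include <QSignalBlocker>

static void sync_zoom_control(QDoubleSpinBox *zoom_box, double view_scale)
{
    QSignalBlocker block(zoom_box);  // suppress valueChanged() for this scope
    zoom_box->setValue(view_scale);
}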
I've not hammered out the editing in substantial detail yet - are you referring to the highlighting? My concept is to select one primary instance for editing in the tree, with the highlighting indicating which other instances will also be impacted by the proposed edits. pinewood, for example, reuses the same objects in the hierarchy, so an edit to one instance of a solid in that hierarchy also has implications for the other instances. I wasn't planning to allow editing of multiple primitive solids or comb trees simultaneously - just to visually indicate when that single object/hierarchy edit has non-local implications.
The highlights make sense. However, if the user has not drilled down, then that path would not be highlighted? I hope the image comes through. What I'm asking about is: what is the function of the boxes I have underlined with the yellow line?
Ah, got it. The boxes are intended to select individual tools, a la the Gimp image editor. So in the case of the view mode, you could select a measurement tool that would put the mouse into a "click and drag" mode like Archer's that would measure distances in the scene. Another button would bring up controls for raytracing, similar to the raytrace control panel.
For the editors, my original thought for the object editing was that each primitive would have its own button and custom dialog. If an object is selected and the object edit panel is selected, the panel would then default to the button corresponding to the selected solid. If a different button is selected, or the highlighted selection is cleared, then the default would be to create a new object of the selected type.
The instance editing is more limited in scope, in that it would only be editing the properties applied to instances of objects in combs, and it may not make sense to have a button palette for that mode. That's the editing area I've given the least amount of thought to.
The clone tool probably fits in there somewhere too - it would be really cool to be able to "preview" the results of clone commands before applying them. And there are surely others - longer term I'd like to make them all discoverable as "tools" to be selected from the palettes.
There are other capabilities, sometimes exposed as "modes" in MGED, that we want to express in some fashion. The "pick" component is one example, where a mouse click shoots a ray into the view or otherwise checks for elements along that line and selects one or more of them. There's also nirt, the angle-distance cursor, and probably others I'm forgetting.
@David Reeves did you have any thoughts on the GUI palette stuff? Is there a better way to go about that?
@starseeker Still working through this right now. However, my thinking is there are 2 types of plugins: one that provides core functionality, and another that provides more add-on functionality. The core plugins are passed essentially a reference to the application, which allows the plugin to add all types of components. The add-on type of plugin works similar to your palette stuff but is maybe slightly more prescriptive and restrictive - prescriptive in where the add-on goes, and restrictive in that instead of an open-ended widget they specify their controls using a property API that we provide from the platform. I'm working through what I think the component architecture should look like. I would like to provide a lot of flexibility but at the same time have a structure to how the components interact. If you have ever looked at blender, I like their whole editor approach to doing things: they have about 23 base editors that all the functionality is provided with, and then various layouts the user can easily flip between depending on the primary function the user is trying to perform. Anyway, I'm just trying to work through that design and how to connect the workspace/editors/tools/modes to be seamless and intuitive - somewhat modeling after blender but more brlcad (CSG) functional. The idea is to get the concepts right from the start to allow easy extension of the platform. It is tough to balance flexibility and complexity: too flexible and it becomes too complex, but without the right amount of flexibility it becomes too limiting. Anyway, still baking on it - I'm sure a lot of the ideas I'm having right now I will just toss out in the interest of getting something done, and later we can always do more refactoring to move the platform toward what we are looking for.
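To make the "restrictive" part concrete, a purely hypothetical illustration (none of these names exist anywhere yet):

// Hypothetical add-on plugin surface: rather than handing the plugin an
// open-ended widget slot, it declares properties and the platform decides
// where and how to present them.
#include <QString>
#include <QVariant>
#include <functional>
#include <vector>

struct ToolProperty {
    QString name;                                      // label shown by the platform
    QVariant value;                                    // current value
    std::function<void(const QVariant &)> on_change;   // callback into the plugin
};

class AddOnTool
{
  public:
    virtual ~AddOnTool() = default;
    virtual QString tool_name() const = 0;
    virtual QString palette() const = 0;               // which palette it belongs in
    virtual std::vector<ToolProperty> properties() = 0;
};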
@starseeker sorry, long answer for a simple question. I'm still not sure about the core functionality coming from plugins - I'm concerned this will lead to a lot of coupling between plugins.
@starseeker Have you ever looked at blender? What are your thoughts?
I've not looked at blender in detail (its code is GPL, so that's a no-go, and I don't actually do much 3D modeling myself)
My thinking, for what it's worth, was to implement as plugins so that if we do end up having to significantly rework the app architecture down the road, the plugins would (mostly) consist of self-contained widgets and signal connections that could be repurposed fairly easily to a new setup. I've pushed a fair number of things all the way down into libqtcad, and non-GUI functionality lower than that (for example, the polygon manipulation logic itself lives in low level C libraries now and the plugin's primary role is to define widgets and supply mouse events.)
I had no intention of even looking at the blender code - I was just looking at it from a design point of view. I like your approach; my question is where the line is between platform and plugin. If we are not careful we can create a solution where one plugin needs to connect its slots to the signals of another plugin. Are we good with that? For example, we could put the outline in a plugin.
Hmm. I think we definitely don't want plugins talking directly to other plugins.
@David Reeves Oh, meant to ask - since you had mentioned wanting to work on how the interface layout is managed, should I dig into the primitive editing?
sounds like a good idea
Tab completion on the qged command line: qged_tab_completion.gif
Gradient background in libdm/qged: qged_gradient_background.mp4
Specific instance drawing control from the command line (and successful raytracing of a partially drawn hierarchy). (For context, the behavior with this file is first shown in MGED and Archer.) qged_instance_drawing_2022-06-03.mp4
This one is a bit tricky, as the changes are invasive and will break MGED under certain circumstances.
@David Reeves If you feel like hunting for a bug, I've got one - when I enable quad view, do an embedded raytrace, and then select a different quad view, I lose the raytraced image in the quad view that is no longer selected. Something specific to the framebuffer - the wireframe is OK.
@David Reeves Possibly related - screengrab is grabbing the wrong image - it is getting part of the qged window??
I'll take a look today
@David Reeves Were you able to look at the above issue?
qged running on Mac:
Screen-Shot-2022-09-23-at-10.57.21-AM.png
Overall, looking really good; the hierarchy and command line are fantastic. Did get some crashes and the fb seems to not be working quite right, but was able to navigate around easily enough for basic display.
Here's one of the crashes:
Crashed Thread: 0 Dispatch queue: com.apple.main-thread
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Codes: KERN_INVALID_ADDRESS at 0x0000000000000008
Exception Note: EXC_CORPSE_NOTIFY
Termination Signal: Segmentation fault: 11
Termination Reason: Namespace SIGNAL, Code 0xb
Terminating Process: exc handler [31256]
VM Regions Near 0x8:
-->
__TEXT 105187000-1051fb000 [ 464K] r-x/r-x SM=COW /Users/*
Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0 libged.20.0.1.dylib 0x0000000105589ff5 csg_wireframe_update(bv_scene_obj*, bview*, int) + 1125 (draw.cpp:140)
1 libdm.20.0.1.dylib 0x0000000105e47f84 draw_scene_obj + 100 (view.c:558)
2 libdm.20.0.1.dylib 0x0000000105e48018 draw_scene_obj + 248 (view.c:566)
3 libdm.20.0.1.dylib 0x0000000105e48018 draw_scene_obj + 248 (view.c:566)
4 libdm.20.0.1.dylib 0x0000000105e483c6 dm_draw_objs + 566 (view.c:749)
5 libqtcad.20.0.1.dylib 0x000000010539e767 QtGL::paintGL() + 791 (QtGL.cpp:156)
6 org.qt-project.QtOpenGLWidgets 0x0000000106a6635a QOpenGLWidgetPrivate::invokeUserPaint() + 234
7 org.qt-project.QtWidgets 0x000000010ac15491 QWidget::event(QEvent*) + 1999
8 org.qt-project.QtOpenGLWidgets 0x0000000106a6707e QOpenGLWidget::event(QEvent*) + 500
9 org.qt-project.QtWidgets 0x000000010abd7775 QApplicationPrivate::notify_helper(QObject*, QEvent*) + 247
10 org.qt-project.QtWidgets 0x000000010abd8660 QApplication::notify(QObject*, QEvent*) + 532
I was clicking around on the hierarchy widget when it did that.
That looks like it might be related to the adaptive CSG wireframe drawing - that's in a funny state right now, I've got some work to do there
Little surprised it'd be on - I thought I had that off by default right now
main may be a bit iffy behavior wise - I've been reengineering the drawing layer for scaling to large models in the draw_rework branch, so I've not checked the main behavior in a little while
Once I'm done hammering on that I'll have to do another demo video, particularly for the selection bits
@David Reeves were you still planning to take a look at the quad view issues above? I'll need to start digging into it fairly soon, but didn't want to duplicate effort if you're looking at it.
Update - I think I have an inkling of what's going on, just not sure what to do about it yet. The mechanism being used to add an "active" border around the current quad window is also resizing the window, which triggers the resize callbacks. That in turn invalidates the old embedded framebuffer contents, since the fb size has changed along with the window size.
OK, I think I may have found an approach - it needs to be tested on more than just Linux to make sure it behaves
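One way to mark the active quadrant without generating resize events (a sketch of the general idea, not necessarily exactly what got committed): wrap each quad cell in a frame whose border width never changes, and only swap the border color when the active view changes:

// Sketch: the border is always 2px, so toggling active/inactive never changes
// the canvas geometry, never fires resize callbacks, and the embedded
// framebuffer contents stay valid.
#include <QFrame>
#include <QVBoxLayout>

static QFrame *make_quad_cell(QWidget *canvas, QWidget *parent)
{
    QFrame *cell = new QFrame(parent);
    cell->setObjectName("quadcell");
    cell->setStyleSheet("QFrame#quadcell { border: 2px solid transparent; }");
    QVBoxLayout *l = new QVBoxLayout(cell);
    l->setContentsMargins(0, 0, 0, 0);
    l->addWidget(canvas);
    return cell;
}

static void set_active(QFrame *cell, bool active)
{
    cell->setStyleSheet(active
            ? "QFrame#quadcell { border: 2px solid yellow; }"
            : "QFrame#quadcell { border: 2px solid transparent; }");
}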
Cute - looking into the screengrab problem, it is specific to the QtGL backend - the QtSW (software rasterizer) version works. The actual OpenGL version is pulling its image from the lower left corner of the app window. That probably means the whole window is being rendered with OpenGL and the naive glReadPixels is reading the whole window, not just the QtGL widget.
Yep - need the Qt widget for that case.
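i.e., something along these lines (a sketch - the real code path runs through libdm, but the key point is asking the QOpenGLWidget for its own framebuffer rather than doing a raw glReadPixels):

// Sketch: QOpenGLWidget composites into its own FBO, so grabFramebuffer()
// returns just that widget's contents - unlike glReadPixels on the default
// framebuffer, which with widget compositing sees the whole top-level window.
#include <QImage>
#include <QOpenGLWidget>

static QImage grab_view_image(QOpenGLWidget *glw)
{
    return glw->grabFramebuffer();
}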
@David Reeves if you get a chance, I'd be curious if you can test the draw_rework branch qged
I have been hammered for a while with my paying gig, no relief in sight. Go ahead if you need to.