@starseeker something about r76818 feels really off (designwise). I get what it's for, but it feels like the wrong design approach to me: highly specialized to points, with new nomenclature (inside/outside) for a concept that's already common. you basically implemented the union operator on a point set.
A generality we've had on deck for a while is the introduction of a "cg" command to "compile geometry", which would bake/simplify a comb with expression "volobj u pntsobj" into a newcomb with "newpnts" expression. This would be an excellent first step.
That would set the stage for other automatic simplification, tree contractions, null object detections, etc. even if initially it only does this one union pnts case.
I'm not quite following - maybe it's just me, but I would expect "cg volobj u pntsobj" to return something that incorporated both volobj and pntsobj. I think we're looking at intersect and subtraction operators here(?) which would make it:
cg pntsobj + volobj = pnts_inside_volobj
cg pntsobj - volobj = pnts_outside_volobj
cg pntsobj u volobj would have to just return pntsobj, since volobj wouldn't have any points. For pnts on pnts interactions it would be better defined:
cg pntsobj u pntsobj2 = all_pnts_in_both
cg pntsobj + pntsobj2 = pnts_only_in_both
cg pntsobj - pntsobj2 = pnts_in_pntsobj_but_not_in_pntsobj2
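For concreteness, the pnts-on-pnts operations above reduce to toleranced nearest-neighbor membership tests. A minimal sketch, assuming a brute-force scan (illustrative C++ only, not BRL-CAD's librt API; all names here are hypothetical):

```cpp
// Hypothetical sketch of toleranced point-set ops: "+" keeps points of A
// with a neighbor in B within dist_tol, "-" keeps points of A without one.
#include <cassert>
#include <cmath>
#include <vector>

struct Pnt { double x, y, z; };

static double dist(const Pnt &a, const Pnt &b) {
    return std::sqrt((a.x-b.x)*(a.x-b.x) + (a.y-b.y)*(a.y-b.y) + (a.z-b.z)*(a.z-b.z));
}

static bool has_neighbor(const Pnt &p, const std::vector<Pnt> &cloud, double tol) {
    for (const Pnt &q : cloud)
        if (dist(p, q) <= tol) return true;
    return false;
}

// A + B: points of A that are within tol of some point of B.
std::vector<Pnt> pnts_intersect(const std::vector<Pnt> &a, const std::vector<Pnt> &b, double tol) {
    std::vector<Pnt> out;
    for (const Pnt &p : a)
        if (has_neighbor(p, b, tol)) out.push_back(p);
    return out;
}

// A - B: points of A with no point of B within tol.
std::vector<Pnt> pnts_subtract(const std::vector<Pnt> &a, const std::vector<Pnt> &b, double tol) {
    std::vector<Pnt> out;
    for (const Pnt &p : a)
        if (!has_neighbor(p, b, tol)) out.push_back(p);
    return out;
}
```

A real implementation would presumably use a spatial index (k-d tree or similar) rather than this O(n*m) scan, and take the tolerance from the command line or the database's distance tolerance.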
There would have to be some convention about the primitive type of the lefthand object dictating the type of the output object, or something along those lines.
We'd also have to think about what something like cg sphobj u pntsobj would mean - do we expand the sphere to contain the points? What do we do with (say) cg sphobj u sketchobj?
@Sean is r76830 reasonably close to what you were thinking?
no, no -- it would rely on existing object, not be args, and act just like a compiler. instead of an expression, you'd compile a given object like "cg some_comb" or "cg -o compiled_obj some_comb".
So you'd have to make a comb and then evaluate it to do the point subset op?
yeah, the compiler would be written to recognize known patterns (just like the c/c++ optimizer in cc). so if we have some_comb = "volobj x pntsobj", it'd see OBJ1 OP OBJ2 and match OP==intersect with either OBJ1 or OBJ2 with a pnts type, and know it can reduce/replace some_comb with a new pnts object
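The "recognize known patterns" idea could start as small as matching a single binary node. A hypothetical sketch of that match, assuming simplified types (these are illustrative, not librt's actual tree structures):

```cpp
// Hypothetical sketch: given a comb node OBJ1 OP OBJ2, recognize
// OP==INTERSECT where either side is a pnts object, meaning the comb
// can be reduced/replaced with a new pnts object.
#include <cassert>

enum class ObjType { PNTS, SOLID, COMB };
enum class BoolOp { UNION, INTERSECT, SUBTRACT };

struct CombNode {
    ObjType left_type;
    BoolOp op;
    ObjType right_type;
};

// True when the node matches the reducible "vol x pnts" pattern.
bool reducible_to_pnts(const CombNode &n) {
    if (n.op != BoolOp::INTERSECT) return false;
    return n.left_type == ObjType::PNTS || n.right_type == ObjType::PNTS;
}
```

The real compiler pass would walk the full comb tree and apply a table of such reductions, just as described above.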
codewise, it still just calls exactly what you wrote
just, UI is through a generalized optimization that we can extend with other optimizable patterns
I guess my main difficulty is that's not at all where I'd look as a user asking "how do I figure out which points are inside this volume?" I'd never think to look to that feature to get there, personally...
like detecting subtractions that do nothing, or replacing huge bot subtractions with minimal bot subtractions, etc
well you wouldn't necessarily think to perform boolean operations to model geometry either -- that's just fundamentally learning the system
the generality and simplicity would give it powerful flexibility too
I guess I think of it as a fundamentally analytic operation rather than a geometric one... I would probably have tried to get it in the analyze command somehow.
Anyway. I'll see if I can tweak the existing cg code to look for a comb of the right form, but anything involving tree walking usually means a headache...
also, you're currently concerned with points, but the question of what's inside something else is not intrinsically specific to points, and the idea of the concept extending to other object types quickly gets out of control
Agreed. Conceptually though, what's the distinction between a CSG boolean expression and a comb tree? Isn't the former a serialization of the latter?
just as likely to want to know what rpp's or sph's are inside some other object, or what triangles are inside another mesh and so on.
comb tree is an object whose structure can be queried
Right, but I mean from the cg command's perspective - why not feed in an evaluation string? (As one of the options - not arguing against combs)
an expression is just text. maybe conforms to a structure, maybe not, is it ad hoc? is it the same as comb command's expression parser or the ted command's or some other? is it prefix? postfix? infix?
Sure, but we already have to solve the problem anyway for the comb command.
starseeker said:
Right, but I mean from the cg command's perspective - why not feed in an evaluation string? (As one of the options - not arguing against combs)
Same reason compilers typically don't support something like: cc "int main() {printf(\"hello world\\n\"); return 0;}"
Heh - I would have thought they would, actually...
I mean they could, but imagine how a compiler expands in complexity...
there's simplicity and improved usability in having consistency, you provide a file
making cg actually mirror cc would be good for the same reasons as having search mirror find
/me winces. I see the argument and I'm willing to give it a shot, but it feels to me like a significant sacrifice in usability and intuitiveness to require compiler-esque thinking to do geometry operations.
So what do you want me to get set up in the cg command to be "minimally functional?" I'd like to leave the functionality at least exposed (undocumented) where it can be accessed
I don't deny that it's non-obvious, but I think it's one of those things someone learns just once and then uses it and takes it for granted.
there is an alternative design we could go with that avoids the expansion of per-type subcommands
instead of cg
cg's "intent" is to reduce / optimize / evaluate geometry into a more optimal, condensed, pre-evaluated, or compressed form
you're wanting to filter geometry
while a compiler can do this particular filtering because it's a reduction, there's plenty it can't do as a compiler
towards that end, we could go with a different verb / action command that's more focused on lower level processing like an "inside" command (or "evaluate" command with an "inside" subcommand)
probably a way to merge it into "analyze"
That feels like a conceptually better fit to me - the cg compile could then call the analyze operations as needed for the reduction.
We'll need to tweak analyze - right now it takes a list of object names, no subcommands.
yeah, and this is a dramatically different "output" so does beg for subcommands to differentiate
I guess the current behavior could be characterized as "summary" - existing behavior of analyze could then become "analyze summary", which in principle might be considered minimally impacting...
or "summarize" if we're going noun verb
could also go the switch route ala gqa
since it's the other "analyze" command that it'd likely merge with eventually
I'll experiment with refactoring analyze a bit - that's pretty simple, from the looks of things. I'll do it in a branch though - for release I think I just need to back this whole thing out. The immediate need was met, so there's no pressing need for the pnt in/out test in the main release right now.
actually now that I'm looking...
I'm personally not a big fan of gqa's option handling... lots of negative user karma over the years.
I suggest mirroring "analyze" like the "check" command, at least in synopsis terms
/me checks check
right, that's why I say it
nobody remembers -Av
but "check volume obj" is pretty darn simple and declarative
/me nods - slightly different mechanism, but this looks similar in some ways to where I ended up with the brep command
yeah, mechanism doesn't have to be the same
I see check's not using bu_opt yet, so that'd be an improvement
is check supposed to replace or merge with analyze eventually?
/me is mentally substituting "analyze" for "check" in front of each of these and so far they all work...
I suppose they could but there is a problem with the different output types and that might be a reason to keep them separated
/me nods
as analyze is currently written, yeah, they totally merge as the summary it produces is also analytic, numbers, info
it's having a command whose output is geometric that becomes very distinguishing
not to say the concepts couldn't merge cleanly
analyze volume -o output obj1 obj2 could work for example, but I'd probably expect 'output' to be an attribute object of some sort
or a txt even, in that case...
well, bo in BRL-CAD
we don't have txt objects yet
we have binunif, that'd be an alternative
Ah, right - couldn't remember where we were with that.
better would probably be a json-style rich object
that way it could structurally contain the whole report in structured form
there is one issue, though..
how the usage would look for this command, without having completely different syntax (i.e., complexity), what [object(s)] means
e.g., analyze inside -o output obj1 obj2 obj3
could limit it to pairs of objects, then it'd be consistent
/me figured the subcommands would have their own syntax
could change the command too, instead of "inside" it could be "intersect" and it applies that operation to all object args, so "obj1 obj2 obj3" is treated as if it were a "obj1 x obj2 x obj3" comb expression
/me is OK with limiting inside to pairs, at least for a first cut
starseeker said:
/me figured the subcommands would have their own syntax
that's usually a sign they either don't belong together, or the design hasn't been sorted out to make it consistent ... otherwise, there's little point in them being subcommanded together when all the args that follow are different
we have a few mashups like that currently
they're usability nightmares imho
I know them, and I have to pull up the usage every time. that's just bad design.
/me sees it mainly as a conceptual grouping - sort of like svn's subcommands are all SVN related, even if they're individually very different.
Or the per-primitive subcommands
er, nearly all of svn commands just take paths... super consistent
the deviation is on their options, not on what follows the options
big difference
I guess I was thinking about svn merge taking two paths, where svn add can take many - that sort of thing
well, three if you count the destination
propset and friends I think are fairly structured as well
I'd just like to also acknowledge that considering the complexity of a revision control system is probably a terrible usability pattern to follow regardless ... :D
s/acknowledge/propose/
/me grins - I actually go the other way. I think it's a good mapping to the conceptual complexity of the problem spaces we're working with.
I mean we're not talking about anything anywhere near as complex -- nearly everything we have can be simplified and made consistent because we have a specific taxonomy that is constrained
the complexity a developer puts up with is not on par of that of general users
we have a highly biased perspective as devs. it's helpful to be aware of that and resist complexity where we can through more considered design.
i'm happy to think through the design implications if you're not -- I just don't think we should punt and add options and have 2/3/4 synopsis lines to document a command where we can avoid it
this seems like an easy case for example. we can just apply the same op to all listed objects in succession
Well, there are subtleties with that - what does it mean, for example, to have a sph followed by a point cloud followed by an arb?
That's the hesitation you're sensing, I think - we can probably come up with a two object interpretation easily enough, but I'm not sure how it scales.
the operation you're using is still simply an intersection
A intersect B
A intersect B intersect C
but what's the output? the points from the point cloud? the subset of the sphere from a gqa segment grid that overlaps the volumes of the other shapes?
the output is the geometric intersection of the three sets...
A intersect B where either is a pnts object will always be a pnts object or the null set
that holds for any number of subsequent intersections
always a pnts result or null
also generalizes nicely and can be extended to things like bot intersect bot
So for points we have an answer - what should we return for (say) two CSG spheres?
we don't have a single implicit object that can represent the result, so I'd expect it to either return a comb with "A x B" (assuming we're talking about geometric output) or return a nurbs object with the evaluated result
would be an awesome place to hook the latter
Now that you mention it I think there is a brep intersect subcommand somewhere...
could also envision there being something like a -t type option where you could specify bot, dsp, or some other object type
there is
I guess the other risk of putting a general syntax out there is users are going to try a whole bunch of things we don't have implemented - won't there be a risk of negative perceptions about our capability/quality/completeness?
facetize does it for bots
sort of another version of the step importer not handling a lot of things...
I think it'd be fine if it aborts with an error saying "ERROR: analyze intersect support limited to 'comb'+'pnts' objects" for starters. Documents the scope, sets expectation, and is easily expanded as support expands.
not that we should leave it that limited for long. I think the comb fallback would be good to implement off the bat so it's well behaved on anything
So another interpretation question - if I intersect two pnts sets, do I return only the points that are in both sets within tolerance?
that would be my expectation and interpretation, yes
someone wanting something else implies another operation, like this: analyze convexhull -o hull pnts; analyze intersection -o volpnts hull pnts2;
OK. That's a separate implementation, so I'll have to think about how to do the pre-processing - whether to find and intersect pnt clouds first and then check them against other volumes...
I'll make a branch and work with it a bit.
Another interpretation question - what if someone unions a point cloud into an otherwise volumetric CSG hierarchy? Is it a no-op under those conditions?
you mean intersects?
Well, I can think of a couple cases. If we do want to intersect it, what if the CSG hierarchy subtracts a volume from the point cloud? Do we first "strip down" the point cloud per the CSG hierarchy and then intersect it?
if it's unioned, it'll be essentially a no-op unless the ray happens to hit the pnt exactly or the pnts have a diameter
they functionally act like spheres
so really no different
just the radii might be infinitesimally small (or they could be huge)
But if we intersect two point clouds as discussed earlier (toleranced distance comparison) then the point cloud will act differently in a hierarchy
Would that be surprising?
keep in mind that would be an analyzed intersection, which is what allows the tolerance and resulting behavior to be specified
the result is a pnt cloud
i'm not entirely following -- what would be surprising?
if I create rppA, pntsA, and pntsB; and combA is "pntsA - rppA" to cull some point, that's all pretty straightforward
if I "analyze intersection -o pntsC pntsA pntsB", then pntsC will have those common to both
explicitly
if I "comb combB pntsA x pntsB", that's the same result, but implicitly
So, given a pnt cloud A intersecting with a hierarchy B containing pnt cloud C: if pnts in C are close within tolerance to pnts in A, such that A intersect C would yield a subset of points common to A and C, but A's intersection test against the hierarchy B uses a ray in/out test, then points that would be part of the intersection only by the action of the point cloud C (i.e. none of the rest of the volume of B interacts with A) would be skipped because the raytracing test would fail.
that's one massive run-on sentence... I'm having trouble parsing it
So in the limit case, intersect point cloud A with the hierarchy B containing A and a sph C that doesn't overlap A at all.
A ray inside test against B from any point in A will report outside.
But A intersect A will return the set A.
when you say intersect, do you mean in the context of a comb intersect operation or this new intersection/intersect command?
the new intersect command
okay
I'm thinking about how I implemented the inside/outside test and what the implications are (or trying to - it's causing run-on sentences...)
if you have a point cloud A and a hierarchy B containing A (unioned), the result is going to be point cloud A...
But I don't think it will - the ray interrogation won't report inside unless something other than A in the hierarchy contains pnts in A.
Or put A' rather than exactly A in the hierarchy - just enough to throw off trivially exact hits, but close enough to pass a tolerance distance check.
I'm not saying what your code does, I don't know what it does... I'm saying set theory is that A x A = A
Right. But my point is I have to do two different comparisons when faced with A vs A' and A vs B - the latter uses the solid raytracer, and the former can't
So I'll get different answers for A' depending on whether it is inside a hierarchy B or not, which seems wrong.
are you talking about the current limitation that pnts don't report hits from shot()??
Kind of - but I think it's more fundamental than that...
if so, then yeah absolutely have to handle pnts specifically because they don't have a raytrace interpretation
ignore the tolerance issue first
Right, so when we evaluate B (which uses the raytracer) there's a problem if a pnt cloud is included in B since the test won't work.
right but that's specifically because of this implementation detail, not theory -- you have to handle pnts
or implement shot() for them
I'll have to construct a test case and see what happens, I guess - I think I know but I might be wrong.
it sounds like you're talking about an implementation concern, which is very much a yes -- you have to handle pnts and by handle pnts that means you have to recognize one is a pnts object and for each point, evaluate it against the other object
and if that other object is a comb, you'd have to scan its logic and look for pnts (applying the right boolean recipe down the hierarchy) to evaluate them with a pnt-to-pnt distance calculation to get the right result...
But if the other object is a mixed object (pnts unioned with other solids) what is the interpretation of the pnts object in the hierarchy? Is it OK to return none of those points since they're zero volume in a CSG hierarchy of solids, or do we need to somehow figure out which points in the hierarchy survive the boolean ops and test them as individual points?
or implement shot() and just look at whether your point is within the other object's partition list... ;)
starseeker said:
But if the other object is a mixed object (pnts unioned with other solids) what is the interpretation of the pnts object in the hierarchy? Is it OK to return none of those points since they're zero volume in a CSG hierarchy of solids, or do we need to somehow figure out which points in the hierarchy survive the boolean ops and test them as individual points?
to return the right set.. the latter!
unless you implement shot()
Assuming my test rays happened to pass close enough to the other points to register... this is starting to sound nasty.
you know all it takes to minimally implement shot() is to iterate over all pnts and call sph_shot(). could even create a db of spheres during prep so you get spatial partitioning.
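As a rough illustration of the "iterate over all pnts and call sph_shot()" idea, here is a self-contained sketch of a ray-vs-point-cloud hit test that treats each point as a sphere (illustrative C++ only; a real shot() would call the sphere primitive's shot routine through librt and build proper partitions, possibly with spatial partitioning from a prepped db of spheres):

```cpp
// Minimal sketch: a pnts "shot" succeeds if the ray passes within the
// radius of any point, using a standard ray-sphere quadratic test.
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Ray-sphere test: does the ray (origin o, unit direction d) hit the
// sphere centered at c with radius r, in the forward direction?
bool ray_hits_sphere(Vec3 o, Vec3 d, Vec3 c, double r) {
    Vec3 oc = sub(o, c);
    double b = dot(oc, d);
    double disc = b*b - (dot(oc, oc) - r*r);
    return disc >= 0.0 && (-b + std::sqrt(disc)) >= 0.0;
}

// Minimal pnts "shot": iterate all points, treating each as a sphere.
bool pnts_shot(Vec3 o, Vec3 d, const std::vector<Vec3> &pnts, double radius) {
    for (const Vec3 &p : pnts)
        if (ray_hits_sphere(o, d, p, radius)) return true;
    return false;
}
```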
without implementing shot, you can just check the comb and only support whatever is easy
reject the complicated as not yet implemented
heh - another use of the db_search filter
like you could support pnts+non-comb for starters, don't even support combs.
but seriously, shot() on pnts is downright trivial to implement
if I didn't have these commits I'm working through, I could have had an initial demo working already :)
I don't know that that would be enough though - six rays for inside/outside solids vs. distance comparisons of pnt clouds are still different tests.
six what??
that's how I'm doing the inside outside test - looking in +- X, Y and Z directions
uh, don't do that...
Are the numerics solid enough to trust one ray?
especially for bots and the like, I was concerned with edge cases
So you're willing to 6x the computational cost because of a concern, without actually knowing...
heh, even if you care about edge cases, 3 rays would've been adequate
rays are infinite, you just need to back out the starting point
/me shrugs - I was throwing something together to see if it worked, wasn't especially worried about performance.
that said, for this, I don't see without evidence why you'd want more than 1 ray per point. you shoot through the point from outside the bounds, and if that point is within a partition, it's a match
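That single-ray check amounts to: shoot from outside the bounds through the point, and see whether the point's distance along the ray lands inside any reported in/out span. A trivial sketch of that last step, assuming hypothetical types rather than librt's actual partition structs:

```cpp
// Sketch: a "partition" is an (in_dist, out_dist) span along the ray.
// The query point is inside the solid iff its distance along the ray
// falls within some partition, give or take a tolerance.
#include <cassert>
#include <utility>
#include <vector>

using Partition = std::pair<double, double>;  // (entry, exit) distances

bool point_in_partitions(double pt_dist, const std::vector<Partition> &parts,
                         double tol = 1e-9) {
    for (const Partition &p : parts)
        if (pt_dist >= p.first - tol && pt_dist <= p.second + tol)
            return true;
    return false;
}
```

In the real command the partitions would come back from the raytracer's hit callback for the one ray through the point.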
starseeker said:
/me shrugs - I was throwing something together to see if it worked, wasn't especially worried about performance.
heh, the same books that talk about premature optimization also talk very adamantly about making order-of-magnitude changing decisions like that without considering the implications!
that's so far beyond not being worried about performance...
I knew there are grazing rays on objects that report solid partitions beyond their volume, so I just threw multiple directions at it to preclude that being an issue. You're right, 3 would have done it.
even still
you're talking about plate mode
I've seen it on NURBS
(solid NURBS)
true, plate mode and nurbs can return incorrect partitions on grazing rays
but it's not right and shouldn't be allowed to spread like a disease to other algorithms because they got it wrong -- at least that's fundamentally wrong in my humble opinion
we should fix them, or get whoever is wanting this feature to pay us to fix them
could also reject those two -- relatively rare at this point -- outliers in the analyze command when it validates the args
plate mode is just wrong-wrong and shouldn't be accommodated anywhere imho.. nurbs is obviously a harder issue but can probably be handled in code by looking at the grazing flag and only shooting a second ray when it grazes
@Sean as for who is wanting the point-inside-solid feature - it's just something I threw together for Caleb
I know, all the more reason I'd be concerned with what's left .. he's effectively done already
(I mean left in the code, it's going to long outlast his involvement and residual impact)
Well, the original bit was part of the facetization work for Dave a while back. It's used by one of the fallback algorithms.
(I already backed out the bit I put in for him in trunk.)
That's why I was way more worried about robustness than performance at that juncture - I think it was some sort of crunch project for Dave.
Honestly, I think you should just shoot a ray through the point and if it returns a bogus result because the partitions are wrong, so be it -- that puts pressure on fixing the partition list, however small, over time. and it's completely justifiable.
At the expense of distorted or failed facetization cases that would otherwise succeed? (Maybe that's not a serious risk - as you say I don't have stats on if or how frequent it is - but it would be a failure where we could potentially have succeeded.)
I don't mean to put you on the defensive.
Sorry. I'm game to clean it up, if it makes sense to spend time on it.
I'll give you facetization just because we've hammered at that for decades unsuccessfully.
I don't think that liberty extends anywhere else in the code, though.
Sure. I only pulled it from there 'cause when Caleb asked I remembered it being there - if it comes out "for real" we can certainly clean it up.
Until we can facetize reliably, or at least consistently complete, whatever that means, that's very much in the "let's get anything working" camp.
(By the way, one other case I know of where a ray test will fail is small triangles in meshes - I hit that with the JASP work earlier this year. I've got a unit test in there somewhere where I captured and isolated an individual triangle that gives the wrong answer.)
That's why I'm using Dickinson's code, slow as it is, for the new brep meshing.
I'd still cringe at 6x vs 3x performance decisions in there if they're frequent, but as I said -- it is what it is until it works and facetize gets a hard pass because we've tried for so long.
I think it DOES matter the moment that code is used anywhere else though. Might as well not exist for use outside without having a critical re-write if it's been developed with a gloves-off approach.
Fair enough.
starseeker said:
That's why I'm using Dickinson's code, slow as it is, for the new brep meshing.
With all due respect, this is rather dangerous thinking from an architecture and maintenance perspective. It means we have multiple shotliners doing different things. Both being maintained, both eventually being debugged. Both with potentially completely disjoint sets of bugs.
It's a green pasture fallacy. One works with a code long enough to find a flaw.
and then abandons using that code because of the flaw, in favor of something else less known but maybe known for that case
that's literally the struggle mentality we have with the s2 folks that have been replicating their own geometry code instead of investing in one place.
short term benefit, long term death
It's the src/libbg/trimesh_pt_in.c point-in-polyhedron test. I ran it directly, side by side with the ray based inside/outside test and inspected reported differences - I'd be glad to ditch it if the issues with the ray test can be resolved, but I'm not sure yet how to do that without potentially impacting performance in some other way or introducing other breakage.
(I remember when Keith tried to adjust our triangle intersection logic for small triangles and ended up breaking the raytrace at larger scales, so I know it's not trivial...)
I know, that's why I didn't say anything when it was introduced, but those kinds of decisions do have significant consequence long term. If there's not time to debug it when you have the error right in front of you and a task that's provoking it, when else will it ever be prioritized? It simply won't and we end up with the duplication, complexity, and increasing costs.
it's a paper cut. any one of them can be tolerated.
and then we spend months defending against things like particular three-letter systems that can be adapted and changed far more easily because they aren't having to maintain that kind of cruft. and often have just as many or more issues, just not yet noticed.
if someone came across it perusing code without knowing the history, they would be entirely justified replacing the perceived duplication without hesitation for something in bn or rt. best we can hope for is putting up signs in the code that typically just scare away anyone but the original author from changing the code.
And I agree that's far from ideal, but I'm not sure what else to do.
could debug the small triangle case, understand what's failing (if it really is failing) and why (more importantly). point in polyhedron is a very simple time-tested function in use throughout industry. there's only a few variants that mostly center around when/how a particular division happens.
If ours is failing, it's probably related to floating point tolerances used in the implementation, not likely the algorithm itself.
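For reference, a point-in-polyhedron test of the kind being discussed is typically ray-crossing parity over the mesh triangles, and the epsilon inside the triangle intersection is exactly where small triangles tend to bite. A self-contained sketch under that assumption (illustrative only, not the actual trimesh_pt_in.c implementation):

```cpp
// Sketch: cast one ray from the query point and count triangle crossings;
// an odd count means the point is inside a closed mesh. Uses the standard
// Moller-Trumbore ray/triangle test, whose eps is tolerance-sensitive.
#include <cassert>
#include <cmath>
#include <vector>

struct V3 { double x, y, z; };
static V3 vsub(V3 a, V3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static V3 vcross(V3 a, V3 b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static double vdot(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Tri { V3 a, b, c; };

// Moller-Trumbore ray/triangle intersection, forward hits only.
static bool ray_tri(V3 o, V3 d, const Tri &t, double eps = 1e-12) {
    V3 e1 = vsub(t.b, t.a), e2 = vsub(t.c, t.a);
    V3 p = vcross(d, e2);
    double det = vdot(e1, p);
    if (std::fabs(det) < eps) return false;   // ray parallel to triangle
    V3 s = vsub(o, t.a);
    double u = vdot(s, p) / det;
    if (u < 0.0 || u > 1.0) return false;
    V3 q = vcross(s, e1);
    double v = vdot(d, q) / det;
    if (v < 0.0 || u + v > 1.0) return false;
    return vdot(e2, q) / det > eps;           // crossing in front of origin
}

bool point_in_mesh(V3 pt, const std::vector<Tri> &mesh) {
    V3 dir = {1.0, 0.0, 0.0};                 // arbitrary ray direction
    int crossings = 0;
    for (const Tri &t : mesh)
        if (ray_tri(pt, dir, t)) ++crossings;
    return (crossings % 2) == 1;              // odd crossings => inside
}
```

A robust version also has to handle the degenerate cases (ray through a vertex or edge, ray in a triangle's plane), usually by perturbing the ray direction and retrying; those degeneracies plus the eps above are the usual floating-point suspects when small triangles answer wrong.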
I've taken an initial whack at the analyze command refactor here: https://sourceforge.net/p/brlcad/code/HEAD/tree/brlcad/branches/analyze_cmd/src/libged/analyze/
Current usage looks like:
analyze obj.s ... - old behavior, except when obj.s matches a command name.
analyze summarize obj.s ... - old behavior
analyze intersect -o interior.pnts pnts.s sph.s ... - intersect pnts object with one or more volumes, generate output pnts object
analyze subtract -o exterior.pnts pnts.s sph.s ... - subtract pnts from object that are inside one or more volumes, generate output pnts object.
I also made a stab at cleaning up the inside/outside test - it's in op_pnts_vol.cpp, uses one backed out ray.
Last updated: Jan 09 2025 at 00:46 UTC