Came across this today: https://medium.com/@taying.cheng/understanding-real-time-3d-reconstruction-and-kinectfusion-33d61d1cd402
Wonder what license his code is using?
Don't see one on github
Interesting use of OSPRay: http://www.sci.utah.edu/publications/Wan2020a/TAMR_final.pdf
Not exactly research, but this is one of the coolest uses of dll injection that I've seen in a long time: http://kylehalladay.com/blog/2020/05/20/Rendering-With-Notepad.html
http://cfg.mit.edu/content/inversecsg-automatic-conversion-3d-models-csg-trees
https://kevinkaixu.net/papers/wu_sgp18_csg.pdf
Looks like EUGP would have probably accepted @starseeker's paper
or siggraph asia, apparently. nice paper.
Saw this machine showcased a couple years ago. Nice to see them making progress: https://www.digitaltrends.com/news/kamp-c-3d-printed-building/
Not exactly research, but relevant to coding research, came across this interesting SIMD wrapping library: https://github.com/simd-everywhere/simde
All of the High Performance Graphics 2020 sessions were posted to YouTube: https://www.youtube.com/channel/UCj04DHvbrUbuShWKzcTVJqg
Interesting paper on more robustly converting a point cloud to mesh, using deep learning
impressive denoising: https://www.sciencedirect.com/science/article/pii/S0010448520300506
@Sean is this of any potential interest for bu_malloc and friends? https://github.com/mjansson/rpmalloc
it sounds similar to the memory allocator I wrote a couple years ago for libbu's small memory allocation. would be interesting to compare their performance since that's what our performance-critical code calls into.
not sure how useful it would be for bu_malloc in general, though. I've already weeded most of the calls out; only a couple outliers remain. (if it's performance critical, the best solution is to avoid allocating in the hot path -- allocate before processing starts, where it costs no time)
the only production memory-hungry spot I know of that we've not optimized is opennurbs, and it already has an allocator slot for overriding the standard allocator -- that's back to comparing against the BU_GET/PUT small memory allocator, and I bet it'd be close to a wash.
when I implemented that allocator, jemalloc was the fastest I profiled, and my improvement numbers were about the same as what this one is claiming.
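something like this rough harness is what I'd use to compare them (sketch only, not a rigorous profile -- it assumes rpmalloc's documented rpmalloc_initialize/rpmalloc/rpfree/rpmalloc_finalize entry points, and the include path depends on how the library gets vendored; a BU_GET/BU_PUT variant could drop into the same template):

```cpp
// Rough micro-benchmark sketch: time many small, short-lived allocations
// through the system allocator vs. rpmalloc.
#include <chrono>
#include <cstdio>
#include <cstdlib>
#include <vector>
#include "rpmalloc.h"   /* header location depends on how rpmalloc is vendored */

template <typename Alloc, typename Free>
static double time_small_allocs(Alloc alloc_fn, Free free_fn, size_t count, size_t size)
{
    std::vector<void *> ptrs(count);
    auto start = std::chrono::steady_clock::now();
    for (size_t i = 0; i < count; i++)
	ptrs[i] = alloc_fn(size);
    for (size_t i = 0; i < count; i++)
	free_fn(ptrs[i]);
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(end - start).count();
}

int main()
{
    const size_t N = 1000000, SZ = 64;   /* lots of small blocks, like BU_GET/PUT traffic */

    double t_sys = time_small_allocs(
	[](size_t s) { return std::malloc(s); },
	[](void *p) { std::free(p); }, N, SZ);

    rpmalloc_initialize();
    double t_rp = time_small_allocs(
	[](size_t s) { return rpmalloc(s); },
	[](void *p) { rpfree(p); }, N, SZ);
    rpmalloc_finalize();

    std::printf("malloc/free: %.2f ms   rpmalloc/rpfree: %.2f ms\n", t_sys, t_rp);
    return 0;
}
```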
@starseeker unrelated, just talked to jeff for the past hour. sends his regards. he noticed all the github thrashing with the uploads; I gave him the lowdown.
fwiw, the latest ospray has a pretty cool superbuild option in their build logic that is nearly identical to what we've been talking about. the superbuild downloads and builds all their dependencies before it builds ospray itself.
wow, and it actually worked flawlessly straight out of a fresh clone, including 7 significant dependencies
Nice! What do they do about ispc?
That was the really tough nut last time I looked at that bootstrap problem.
Hah! Hadn't heard from jeff in a while, but I've been following his github star traffic - I think he's scared up about a dozen small things I'd like to check out at some point.
My regards to him if you should chat again. Maybe we can get him to poke his nose into BRL-CAD + Ospray again once we're on github...
Ah, OK - they're just downloading the ispc binary
soo apparently there's a high-performance low-level GPGPU abstraction named HIP, and AMD went all out on it, even porting a solid implementation that runs on nvidia/cuda (amd wants into the hpc datacenter bigtime, and they're swinging hard with those epycs -- I'm lurvin' my dual 7702). I wonder if that will possibly supplant and/or supplement opencl?
probably supplement if I had to wager a guess. link to it?
https://rocmdocs.amd.com/en/latest/Programming_Guides/HIP-terminology.html is some, I forgot where I saw the thing that got me looking
oh I like dat table tho
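for reference, a minimal HIP kernel really is just CUDA with hip-prefixed runtime calls, which is what that terminology table is mapping out. a toy vector-add sketch (assumes a hipcc build; nothing here is tied to our code):

```cpp
// Minimal HIP sketch: same model as CUDA, compiled with hipcc, runs on either
// the AMD (ROCm) or NVIDIA (CUDA) back end.
#include <cstdio>
#include <vector>
#include <hip/hip_runtime.h>

__global__ void vadd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
	c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;
    std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n);

    float *da, *db, *dc;
    hipMalloc((void **)&da, n * sizeof(float));
    hipMalloc((void **)&db, n * sizeof(float));
    hipMalloc((void **)&dc, n * sizeof(float));
    hipMemcpy(da, ha.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(db, hb.data(), n * sizeof(float), hipMemcpyHostToDevice);

    dim3 block(256), grid((n + 255) / 256);
    hipLaunchKernelGGL(vadd, grid, block, 0, 0, da, db, dc, n);

    hipMemcpy(hc.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);
    std::printf("c[0] = %f\n", hc[0]);   /* expect 3.0 */

    hipFree(da); hipFree(db); hipFree(dc);
    return 0;
}
```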
https://www.sciencedirect.com/journal/softwarex
oh, that is interesting! thanks @starseeker
a certain visibility calculation that I've been working on seems super appropriate there
https://www.khronos.org/blog/vulkan-ray-tracing-final-specification-release
http://graphics.cs.uos.de/bezierguarding.html looks at first glance like something I was trying to do in a more ad-hoc fashion during the meshing work... code is MIT licensed too.
https://ranahanocka.github.io/point2mesh/
The first paper is out of my depth without study.
The second paper is almost the same shrinkwrapping method I've mentioned before that we could/should probably implement, only using a neural net to pull the points in intelligently instead of an ant-crawling algorithm to walk the mesh points inward.
The latter is also a great example of something we should continue to work towards -- they use blender as the processing engine, similar to how we use mged.
I see the infrastructure they are using is heavily python based - I wonder what it would take to recode the core elements of the algorithm on top of something like tensorflow's C API...
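just to sketch what that might look like (purely hypothetical -- this assumes the trained net were exported as a SavedModel; the "point2mesh_savedmodel" path is made up), the TF C API load/teardown skeleton is roughly:

```cpp
// Hypothetical skeleton for driving a pre-trained network from C/C++ via the
// TensorFlow C API.  The SavedModel path and tag are placeholders.
#include <cstdio>
#include <tensorflow/c/c_api.h>

int main()
{
    std::printf("TensorFlow C library version: %s\n", TF_Version());

    TF_Status *status = TF_NewStatus();
    TF_Graph *graph = TF_NewGraph();
    TF_SessionOptions *opts = TF_NewSessionOptions();

    /* A real port would export the trained network as a SavedModel and load
     * it here, then run TF_SessionRun with the point/mesh tensors. */
    const char *tags[] = {"serve"};
    TF_Session *session = TF_LoadSessionFromSavedModel(
	opts, nullptr, "point2mesh_savedmodel", tags, 1, graph, nullptr, status);

    if (TF_GetCode(status) != TF_OK)
	std::printf("load failed: %s\n", TF_Message(status));

    if (session) {
	TF_CloseSession(session, status);
	TF_DeleteSession(session, status);
    }
    TF_DeleteSessionOptions(opts);
    TF_DeleteGraph(graph);
    TF_DeleteStatus(status);
    return 0;
}
```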
Heh - this one goes the other way, in to out: https://www.cs.bgu.ac.il/~asharf/Projects/BLOB/index.htm
That's what I was referring to - blender's interface is a python one akin to how ours is tcl
I think our ged interface is a considerably more powerful design than what blender exposes, and easier to use, but they're geared for different things.
it would be interesting to study and see if we could find some alignment
oh, you're talking about the use of python rather than implementing the point2mesh feature?
yeah, the interface and methodology, not necessarily this particular use case.
Ah. My mistake. I was thinking we wanted the feature
this particular use case is of course very interesting in its own right. It would be cool to implement what they're doing in c++
I didn't see anything terribly complicated in there.
the hardest bits would probably be the hooks it's using in blender
Unless I'm misunderstanding, there's an extensive python ecosystem behind this - pytorch, meshcnn, pytorch3d...
that's how most python works... even simple things can be an extensive ecosystem of plugins and "libraries"
/me should probably take some classes on CNN/DNN - I know virtually nothing about it
it's actually nice that the project's code is relatively simple, so each call entry could be tackled fairly independently
you should, it's awesome stuff. AI was a specialty of mine in undergrad. Did a lot with all kinds of NNs before they became a thing everywhere.
I'll have to look at that paper more closely - visually, their initial mesh looks like a decimation of the convex hull but I'm not sure what criteria they used to set it up...
I still like GAs more, but NNs get sh!t done fast. Eventually, a GA+NN will replicate every aspect of human neural processing.
yeah, that's what I saw too. get a convex hull and then do some sort of CNN transform on the points to pull them in intelligently.
is tensorflow still the 800 pound gorilla in the field?
tensorflow is the jungle
<chuckle> Me thinks their dev team should drive around with that on a bumper sticker...
you may find the basic concepts are usable even without the NNs: you could just do what they're doing, and when it comes to the decision point where it calls into the NN, just make the naive decision instead.
plus, you can use the NN without tensorflow or whatever else
tensorflow lets you build and evolve NNs. They presumably aren't training an NN -- they trained one and are using it I think
so you could use the one they came up with. it's likely encoded in there somewhere.
I suspect I lack the background to follow that part... the implication I got was they were somehow using the point cloud to guide the CNN
they most certainly are
So there's another model of some sort they would have created in advance?
NNs are really just super basic inputs-to-output functions. so it's probably set up as feed points in, get points out.
the black box is what to do to the points, and that's typically trained in some manner
self-prior is a term specific to NN then? That's one of the bits I haven't decoded yet...
Oof. Got to pack it in for the evening - back tomorrow.
Nah, that's just some package's variable naming if I had to guess. self is a python thing. prior probably refers to a previous convolution of the neural net (happens during training)
self in python is similar to 'this' in C++
it's just a convention for a particular class scope so you can access member variables and functions, so there's apparently one called "prior"
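circling back to the naive-decision idea above, here's a toy sketch of that shrinkwrap loop with a naive nearest-point pull standing in where point2mesh consults its network. all names are illustrative -- this isn't their API or ours:

```cpp
// Toy sketch: start from a coarse enclosing mesh and repeatedly pull each
// vertex toward the target point cloud.  The displacement rule is the
// pluggable "black box" -- here a naive nearest-point pull; a trained network
// would instead map (vertex, local context) -> offset.
#include <array>
#include <cstdio>
#include <limits>
#include <vector>

using Point = std::array<double, 3>;

static Point nearest_cloud_point(const Point &v, const std::vector<Point> &cloud)
{
    double best = std::numeric_limits<double>::max();
    Point best_pt = v;
    for (const Point &p : cloud) {
	double d = 0.0;
	for (int k = 0; k < 3; k++)
	    d += (p[k] - v[k]) * (p[k] - v[k]);
	if (d < best) { best = d; best_pt = p; }
    }
    return best_pt;
}

/* Naive decision: step each vertex a fraction of the way toward its nearest
 * cloud point. */
static void shrink_step(std::vector<Point> &verts, const std::vector<Point> &cloud, double rate)
{
    for (Point &v : verts) {
	Point target = nearest_cloud_point(v, cloud);
	for (int k = 0; k < 3; k++)
	    v[k] += rate * (target[k] - v[k]);
    }
}

int main()
{
    std::vector<Point> cloud = {{0,0,0}, {1,0,0}, {0,1,0}, {0,0,1}};     /* scanned points */
    std::vector<Point> verts = {{2,2,2}, {-2,2,2}, {2,-2,2}, {2,2,-2}};  /* coarse hull verts */

    for (int iter = 0; iter < 50; iter++)
	shrink_step(verts, cloud, 0.2);   /* real code would also re-mesh/regularize */

    std::printf("first vertex: %g %g %g\n", verts[0][0], verts[0][1], verts[0][2]);
    return 0;
}
```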
https://www.phoronix.com/scan.php?page=news_item&px=Zink-OpenGL-3.3-Mesa-20.3
https://people.unica.it/gianmarcocherchi/files/2020/09/fast_and_robust_mesh_arr.pdf
https://github.com/gcherchi/FastAndRobustMeshArrangements
This classic was recently made available!
An Introduction to Ray Tracing (The Morgan Kaufmann Series in Computer Graphics) [PDF]
Cool!
Fast Exact Booleans for Iterated CSG using Octree-Embedded BSPs
https://arxiv.org/pdf/2103.02486.pdf
I think those researchers are working at the same place that puts out openflipper/openmesh - wonder if they could be enticed to look at applying that approach to our facetization problem...
https://github.com/potato3d/instancing
Fascinating.
https://web.cse.ohio-state.edu/~dey.8/paper/tcocone/tcocone.pdf
https://hal.inria.fr/hal-01596553/file/CAD17.pdf - says they have an open source implementation in GEOGRAM
humph. need to boil it down to the pieces we can use without license incompatible deps, and see if that's got enough to be of use. Oh well, a project for another day...
Note to future self - to trigger reconstruction, this is the command:
vorpalite profile=reconstruct t.xyz rebuild.obj
Works with default write output from pnts command.
This is an interesting one... looks like someone studied BRL-CAD and performed a UML relationship diagram deconstruction of the system: http://eprints.library.odeku.edu.ua/id/eprint/6217/1/VelykodniySS_Reengineering_open_software_2019.pdf
Sean said:
This is an interesting one... looks like someone studied BRL-CAD and performed a UML relationship diagram deconstruction of the system: http://eprints.library.odeku.edu.ua/id/eprint/6217/1/VelykodniySS_Reengineering_open_software_2019.pdf
Despite the goal of the paper (reengineering BRL-CAD), I think that light-weight UML diagrams could be useful for documenting the structure of the code.
By the way, my preferred UML tool is UMLet.
That professor actually wrote five papers on brl-cad
I'm not sure his intent was actually to re-engineer as much as it was to simply study it
Exchanging some messages with him, maybe get some collaboration going
From Siggraph 2021, interesting new approach to representation: https://github.com/duxingyi-charles/Boundary_Sampled_Halfspaces
Cool, I hadn't realized mitsuba's newer implementations were using a liberal license now: https://github.com/mitsuba-renderer/mitsuba3
Wow, now that's impressive... https://github.com/XingangPan/DragGAN
Generating CAD models from engineering sketches (and point clouds and meshes, and more):
https://www.eurchembull.com/uploads/paper/d17bb7ba2b4030974b08506d707c6a70.pdf
and https://www.semanticscholar.org/reader/57d71b30f2ff68bde4a0c50322bb93a5c3358ee0
not to forget this related gem from siggraph 2022: https://geometry.cs.ucl.ac.uk/group_website/projects/2022/free2cad/
all siggraph 2022 talks and papers are at https://blog.selfshadow.com
Fun writeup from someone who got a Siggraph paper accepted a couple years ago. The paper is even related to our work -- implicit rendering on the gpu: https://www.mattkeeter.com/projects/siggraph/
Sounds kind of interesting from a "point cloud scans to CAD" perspective: http://www.cs.columbia.edu/cg/deepcad/
starseeker said:
Sounds kind of interesting from a "point cloud scans to CAD" perspective: http://www.cs.columbia.edu/cg/deepcad/
That's definitely interesting to me! Not so much for the drafting approach, but the underlying generation. I think the same concept would extend well to CSG and/or NURBS.
Here are a couple more papers that just came out of CVPR 2024:
stable mobile-device SLAM algorithm:
https://spla-tam.github.io
constructing CAD from point clouds:
https://www.obukhov.ai/point2cad
Very interesting HPC-tuned math library: https://bitbucket.org/blaze-lib/blaze/src/master/
Has me thinking about the old effort to implement fastf_t as a c++ class, and how this might be a means to simdify huge swaths of code.
That does look interesting. Heh - I suppose I'd get to find out where I've goofed and used double explicitly instead of fastf_t if we did something like that...
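a quick hypothetical of what that fastf_t-as-class idea might look like on top of blaze -- vec3 here is a made-up stand-in, not an existing BRL-CAD type, and the comments just note the vmath.h macros it would mirror:

```cpp
// Hypothetical sketch: expressing a few vmath.h-style operations with blaze's
// fixed-size vectors so the library can emit SIMD code for them.
#include <iostream>
#include <blaze/Blaze.h>

using vec3 = blaze::StaticVector<double, 3UL>;   /* stand-in for a fastf_t[3] */

int main()
{
    vec3 a{1.0, 2.0, 3.0};
    vec3 b{4.0, 5.0, 6.0};

    vec3 sum = a + b;        // roughly what VADD2(sum, a, b) does today
    vec3 scaled = 2.0 * a;   // VSCALE(scaled, a, 2.0)
    double d = dot(a, b);    // VDOT(a, b)

    std::cout << sum[0] << " " << scaled[2] << " " << d << "\n";
    return 0;
}
```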
Sean said:
Very interesting HPC-tuned math library: https://bitbucket.org/blaze-lib/blaze/src/master/
Has me thinking about the old effort to implement fastf_t as a c++ class, and how this might be a means to simdify huge swaths of code.
Does BRL-CAD use something like this? Or what's behind BRL-CAD's fast math calculations?