Hey folks
I've been playing around with BRL-CAD lately on my setup and started building something I thought you guys might find interesting.
I have put together a working local prototype of an MCP (Model Context Protocol) server for BRL-CAD. Right now, it uses a non-blocking Tcl socket bridge to pipe geometry commands into a live MGED session. I hooked it up to an LLM via LangGraph, so I can literally just type something like "make a 15mm cylinder and subtract it from that sphere," and the BRL-CAD GUI updates live with the boolean result. (So far I've only wired up spheres and boolean operations, but more tools could definitely be indexed over time.)
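For anyone curious what the bridge looks like from the Python side, here's a minimal sketch. The port number and the newline-terminated request/reply protocol are my own choices for illustration, not anything BRL-CAD defines; it assumes MGED has opened a server socket (e.g. via Tcl's `socket -server`) that evaluates whatever line it receives:

```python
import socket

def send_mged_command(cmd: str, host: str = "localhost", port: int = 5555) -> str:
    """Send one geometry command to a listening MGED Tcl server socket
    and return its reply.

    Assumes MGED is listening on (host, port) and speaks a simple
    newline-terminated protocol -- both are illustrative assumptions,
    not part of BRL-CAD itself.
    """
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall((cmd + "\n").encode())
        return sock.recv(4096).decode()
```

With a live session, a call like `send_mged_command("in cyl.s rcc 0 0 0 0 0 15 7.5")` would then create the 15mm cylinder from the example above.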
I know one of the long-term goals for BRL-CAD is improving the UX and lowering the learning curve. Letting users interact with the CSG engine using natural language feels like a really fun way to tackle that. Also, I noticed that tools like FreeCAD and OpenSCAD already have active MCP integrations out there, but BRL-CAD doesn't seem to have one yet. It feels like a missed opportunity to get BRL-CAD plugged into the new AI agent ecosystem.
I'm really interested in applying for GSoC this year. I know an MCP integration isn't currently listed on the official problem statements, but would the mentors be open to considering a proposal around this? My goal for the summer would be to turn this Tcl socket prototype into a production-grade interface, perhaps integrated with the main GUI as a model-agnostic MCP interface with a BYOK system.
(video attachment: WhatsApp Video 2026-02-24 at 20.44.18.mp4)
Hi @Raghav Sharma and thanks for the outstanding introduction. Short answer is yes.
The longer answer is that this is something I've been actively working on and thinking about, albeit with manual scaffolding to explore how practical and effective it can be. So yes, you should definitely submit your ideas and we should continue to discuss. The potential is tremendous for UX, but also as a means for discovery and productivity. The details of how, which skills/tools/features, and what data-integrity safeguards will need to be developed as well.
Love custom ideas like this that align.
Aha I'm glad that's the case @Sean
I'd love to discuss the tools and processes with a mentor while I continue to build upon this scaffold and before I start drafting a proposal. Things like integrations with the mged CLI/GUI window, memory persistence and a priority list for tool integration. Would it be possible to perhaps have a call some time?
I've been building upon this in the meantime. I have added support for sphere, cylinder, and box creation along with basic boolean operations.
I have also added a soft fallback to the user in case of missing parameters. (If I ask the AI to make a cylinder of radius 5 without specifying the height or the location, it gives me a message asking me to specify the missing parameters before anything is sent to the socket bridge.)
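The fallback check itself is simple. A sketch with an illustrative schema (the shape names and required fields here are just examples, not the real tool definitions):

```python
# Hypothetical per-shape requirements; the real schema would come from
# the MCP tool definitions.
REQUIRED = {
    "cylinder": ["radius", "height", "location"],
    "sphere": ["radius", "location"],
}

def validate_params(shape: str, params: dict) -> dict:
    """Soft fallback: instead of letting the LLM guess missing dimensions,
    return a clarification message listing what the user still needs to give."""
    missing = [p for p in REQUIRED.get(shape, []) if p not in params]
    if missing:
        return {"ok": False,
                "message": "Please specify: " + ", ".join(missing)}
    return {"ok": True, "params": params}
```

So "make a cylinder of radius 5" yields `{"ok": False, "message": "Please specify: height, location"}` and never reaches the socket bridge.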
(screenshot attachment: image.png)
I am also handling command formatting in the MCP layer itself so that there are no weird hallucinated commands going to the editor. (though I need to add some sort of try-catch validation here for invalid values coming from the LLM before they hit the editor, or to evaluate errors raised by the editor after the command goes through)
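Roughly what that validation layer could look like, sketched for the sphere case (the field order follows MGED's `in name sph x y z r`; the specific checks are illustrative):

```python
def format_in_command(name: str, primitive: str, values: list) -> str:
    """Format an MGED `in` command, rejecting non-numeric or nonsensical
    values before they ever reach the editor."""
    try:
        nums = [float(v) for v in values]
    except (TypeError, ValueError):
        raise ValueError(f"non-numeric parameter in {values!r}")
    if primitive == "sph":
        if len(nums) != 4:
            raise ValueError("sph needs exactly x y z r")
        if nums[3] <= 0:
            raise ValueError("sphere radius must be positive")
    return f"in {name} {primitive} " + " ".join(str(n) for n in nums)
```

Errors raised here go back to the LLM as feedback rather than reaching MGED; errors the editor itself raises would still need the second half of that handling.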
@Raghav Sharma I would love to set up a call to talk in more depth. We have held open meetings with all contributors before and may do that again this year as well. It really though often boils down to motivation, having a plan, and determination. If you’re excited about what you’re doing others often will be excited also.
For mcp integration, I wonder if you could make a tool robust enough to discover how it needs to issue commands, maybe by querying the help system or running commands like “in” interactively where it describes each.
(As a means for mining hallucinations also)
Having an error handler sounds like a great idea regardless so it can hopefully recover given expected situations.
Sean said:
Raghav Sharma I would love to set up a call to talk in more depth. We have held open meetings with all contributors before and may do that again this year as well. It really though often boils down to motivation, having a plan, and determination. If you’re excited about what you’re doing others often will be excited also.
@Sean That sounds great! I really am excited to build this, and I am in the process of drafting the technical architecture for it. Please let me know whenever you'd be available; I'd love to hop on some time to discuss the details and make sure it aligns with BRL-CAD's goals.
Sean said:
For mcp integration, I wonder if you could make a tool robust enough to discover how it needs to issue commands, maybe by querying the help system or running commands like “in” interactively where it describes each.
Ah yes I was thinking about this too.
I was initially thinking of building a standard MCP server for this (the kind Slack or Postgres integrations currently use, with statically defined tools), but that might not work for BRL-CAD: it has around 400 commands, each with different parameters, which would be a pain to index manually and would absolutely blow up the LLM's context window.
So I was in the process of redefining it as a dynamic MCP with discovery and error-handling APIs that actively fetch context from terminal responses for tool selection, query formatting, and error recovery.
MCPs with requirements like this rely on a ReAct (Reason + Act) sequence, which is actually pretty achievable: our agent can ingest man pages on the fly to understand commands at runtime, or use `ls` to query existing elements in the database. I'll try to get a prototype of this approach working before our call.
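A sketch of that ReAct loop, with `llm`, `get_man_page`, and `run_mged` as stand-in callables (none of these are real APIs; they're placeholders for the LangGraph node, the man-page fetch, and the socket bridge):

```python
def discover_and_run(task, cheatsheet, get_man_page, llm, run_mged, max_steps=5):
    """ReAct-style discovery: the LLM picks a command from a one-liner
    cheatsheet, reads its man page, proposes a concrete command line, and
    retries on MGED errors -- capped at max_steps to avoid an infinite loop."""
    context = f"Task: {task}\nAvailable commands:\n{cheatsheet}"
    for _ in range(max_steps):
        choice = llm(f"{context}\nWhich command applies? Answer with its name.")
        context += f"\nDocs for {choice}:\n{get_man_page(choice)}"
        cmd = llm(f"{context}\nWrite the exact command line to run.")
        ok, output = run_mged(cmd)  # (success, terminal output) from the bridge
        if ok:
            return output
        context += f"\nError from {cmd!r}: {output}"  # feed the error back
    raise RuntimeError("gave up after max_steps attempts")
```

The error feedback on the last line is what lets the agent do things like rename `sphere.s` when MGED reports the name already exists.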
We could also do some visual self-correction with VLMs (vision-language models), with the agent actually getting visual context. The agent could run `rt` commands to output PNGs from different camera angles to visually verify things like boolean ops, then self-correct the coordinates if needed.
Though it would be more of an experimental thing we could try after building the base MCP
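For reference, the render step could be as simple as shelling out to `rt` with varying azimuths. The `-a`/`-e`/`-w`/`-n`/`-o` flags are documented rt options, but whether a `.png` output name yields PNG directly or a `.pix` needing `pix-png` conversion may depend on the build; paths and sizes here are illustrative:

```python
import subprocess

def rt_command(db_file, objects, out, azimuth, elevation=35, size=512):
    """Build one `rt` invocation: -a/-e set view azimuth/elevation,
    -w/-n the image dimensions, -o the output file."""
    return ["rt", "-a", str(azimuth), "-e", str(elevation),
            "-w", str(size), "-n", str(size), "-o", out,
            db_file, *objects]

def render_views(db_file, objects, prefix, azimuths=(0, 90, 180, 270)):
    """Render the named objects from several angles so a vision model
    can inspect the result of a boolean operation."""
    outs = []
    for az in azimuths:
        out = f"{prefix}_az{az}.png"
        subprocess.run(rt_command(db_file, objects, out, az), check=True)
        outs.append(out)
    return outs
```

The four resulting images would then be handed to the VLM alongside the original request for a pass/fail-style check.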
Successfully built a prototype where the agent finds the commands it needs to execute via a resource sheet and can automatically query man pages for their usage.
The basic flow is:
the agent first checks the query, goes through a bunch of predefined tools (that I had to define for some tougher syntactic or sequential operations like boolean ops)
Then if the agent does not find them in predefined tools, it loads up a resource (imagine a cheatsheet) with a list of available commands with 1 liner descriptions
if the agent sees something relevant, it opens up the man page for that command, learns how to execute it, and then runs it in the CLI
If there is any error, it goes back to the agent via the socket; the agent reads the error and then corrects the method to run the operation (done in an incremental loop up to a set number of executions, currently capped at 5, just so we don't run into an infinite loop)
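The tiered dispatch at the top of that flow can be sketched like this (all interfaces hypothetical: `predefined_tools` maps tool names to matcher functions, and `llm` stands in for the model call):

```python
def select_tool(query, predefined_tools, cheatsheet, llm):
    """Tiered dispatch: statically defined tools (needed for the tougher
    syntactic/sequential operations like booleans) get first shot at the
    query; anything else falls through to LLM-driven discovery against the
    one-liner command cheatsheet."""
    for name, matcher in predefined_tools.items():
        if matcher(query):
            return ("predefined", name)
    choice = llm(f"Commands:\n{cheatsheet}\nName the one command that fits: {query}")
    return ("discovered", choice.strip())
```

A `("discovered", name)` result is what triggers the man-page lookup and the error-corrected execution loop described above.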
(In this case, an error occurred: the LLM saw that a sphere named sphere.s already exists, so it went back, changed the name, and re-executed the command)
It's pretty rough around the edges right now, but I could certainly refine it across the GSoC development period
You can also ask about any commands
It will automatically query the help pages and tell you about it
An example where the agent finds the necessary command, reads its documentation, and executes it autonomously while also answering a user question about the file location
@Raghav Sharma Definitely showing some promise in a quick span of time. What would be good to articulate in your proposal is just how far would you plan on taking it? What sorts of user stories do you envision achieving? What about possible RAGification of the tutorials or curated examples of the mcp tools being defined so it has various patterns to leverage? Would it be able to be an instructive agent that helps teach as it works? What sort of and how much base knowledge would be good to have defined in advance? What about making it work offline usably well enough with a local model? Or via some online service, but without consuming credits? Just a few questions like that to think about and help scope what it would and would not attempt to achieve under GSoC. Obviously can't do everything but could probably get a lot done at the pace you're demonstrating. Think about how to make it deployable to users. Think about what tech stack changes are assumed. Think about multiple platforms. Think about maintainability/extensibility, etc.
Hello! I can't install BRL-CAD on Arch Linux; I can't find a PKGBUILD.
Last updated: Mar 11 2026 at 01:08 UTC