03:46.07 |
*** join/#brlcad IriX64
(n=IriX64@bas3-sudbury98-1168048487.dsl.bell.ca) |
09:06.04 |
*** join/#brlcad b0ef
(n=b0ef@084202024060.customer.alfanett.no) |
16:35.43 |
Maloeran |
Merry monday and a happy new week! |
16:43.51 |
Maloeran |
On this special day, let's hope and
rationalize on dreams of harmony between managers and programmers
in the troubled regions of our world |
16:47.36 |
``Erik |
and happy hellidays and all that :) |
16:49.20 |
``Erik |
oh, btw, from some quick&dirty testing,
you're in the neighborhood of 40x faster (I haven't done a REAL
benchmark comparison, just pulled some quick numbers... different
geometry, but I think it's reasonably similar in occlusion and
complexity)... bear in mind, you'll slow down once you put in hooks
for distributed |
16:49.43 |
Maloeran |
40 times faster than the old libRT? |
16:49.51 |
``Erik |
I'm rigging up a fbsd box with a funny
compiler and X in a funny place, just to see what happens when I
try to build all the ports |
16:49.52 |
``Erik |
adrt |
16:50.02 |
Maloeran |
40 times faster than adrt? What
the... |
16:50.06 |
``Erik |
librt gets 30krps on a good day, heh |
16:50.50 |
Maloeran |
I'm writing state synchronisation at the
moment, for distributed processing. Distributed processing
shouldn't be too much of a hit with (very) good bandwidth |
16:50.50 |
``Erik |
quick and dirty numbers.. may be a whole order
of magnitude off ;) I was looking at some old scalability graph
info |
16:51.16 |
``Erik |
how good is "(very) good"? gigE? ib?
myri? |
16:51.24 |
``Erik |
or does 100base count? |
16:51.47 |
Maloeran |
It all depends on the task, how much data
there is to send back to the master node ; just raw pixels, or
intersection coordinates and so on? |
16:52.04 |
Maloeran |
Raw pixels shouldn't scale too badly with some
compression |
16:52.49 |
Maloeran |
I'm still working on state synchronisation, so
that all operations on the state of the master node are propagated
to the other nodes ; any new node can connect to the master at any
time too, and its state is sync'ed |
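For readers following along, here is a minimal sketch of what that kind of master-to-node state propagation might look like; the struct and function names are hypothetical and are not taken from rayforce or adrt. Every state-changing operation is appended to a log on the master, so a node that joins late replays the log to catch up and then keeps applying new operations as they stream in.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical wire format for one state-changing operation. */
    typedef struct {
        uint32_t opcode;     /* e.g. ADD_GEOMETRY, SET_CAMERA, ...   */
        uint32_t length;     /* payload size in bytes                */
        uint8_t  payload[];  /* opcode-specific data                 */
    } sync_op_t;

    /* The master keeps an ordered log of everything it has done. */
    typedef struct {
        sync_op_t **ops;
        size_t      count;
    } sync_log_t;

    /* A newly connected node replays the whole log to reach the
     * master's current state, then applies later ops as they arrive. */
    static void sync_replay(const sync_log_t *log,
                            void (*apply)(const sync_op_t *, void *),
                            void *state)
    {
        for (size_t i = 0; i < log->count; i++)
            apply(log->ops[i], state);
    }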
16:52.50 |
``Erik |
ummmm, I'm not sure... the end application
kinda needs in and out coordinates, with their component |
16:53.17 |
Maloeran |
But what for? Can't it use these coordinates
on the remote node, and just send back the result? |
16:53.53 |
Maloeran |
Sending results of computations based on the
raytracing is clearly _much_ lighter, usually |
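To make the bandwidth argument concrete: per ray, a full segment list carries every in/out hit point plus a component id, while a remotely run "shader" could send back one small packed value. The layouts below are invented purely for illustration and are not the actual application's format.

    #include <stdint.h>

    /* One partition of a segment list: in/out points plus the component
     * that was hit -- roughly 28 bytes, multiplied by the number of
     * partitions along the ray. */
    typedef struct {
        float    in_pt[3];
        float    out_pt[3];
        uint32_t component;
    } segment_t;

    /* What a remote shader could return instead, once it has consumed
     * the segments locally: e.g. a shaded pixel, 4 bytes per ray no
     * matter how many surfaces the ray crossed. */
    typedef struct {
        uint8_t r, g, b, a;
    } packed_result_t;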
16:53.54 |
``Erik |
it's supposed to be integrated easily with
another app... which expects a segment list |
16:54.05 |
``Erik |
obviously, but it has to talk with a brain
dead app |
16:54.10 |
Maloeran |
Can't this other app run its "shader" code
remotely? |
16:54.23 |
``Erik |
nope |
16:54.30 |
``Erik |
:/ it's retarded |
16:54.32 |
Maloeran |
Transferring raw raytracing results will kill
performance badly |
16:54.38 |
Maloeran |
Gah! Rewrite that :p |
16:54.40 |
``Erik |
it tries to be the center of the
universe |
16:54.44 |
``Erik |
not mine to rewrite... heh |
16:54.57 |
``Erik |
the, uh, horror project was an attempt to
rewrite it |
16:55.04 |
Maloeran |
Oh, I see |
16:55.05 |
``Erik |
brlcad isn't dumb enough to touch it
;) |
16:55.29 |
``Erik |
we did put some of the, um, application into
adrt and got really good results |
16:56.00 |
``Erik |
naturally, that'll be something to try down
the road with rayforce... but the way we got the pointy hairs to
sign off and throw money was by talking the retarded language of
the retarded... people... |
16:56.01 |
``Erik |
:) |
16:56.20 |
Maloeran |
Eh, typical :) |
16:56.33 |
``Erik |
that's the real world for ya, dude
:( |
16:56.37 |
Maloeran |
The code lying on top of rayforce must be
fixed to be distributed too, seriously |
16:56.50 |
``Erik |
um |
16:56.54 |
``Erik |
it, uhhh, sorta kinda is... |
16:56.56 |
Maloeran |
You can't distribute half of the processing
and expect good results, transferring all half-way results back to
the master node |
16:57.06 |
``Erik |
but it was done by the same dude who did the
distributed processing for the hell project |
16:57.23 |
``Erik |
so the scalability goes to about 2 nodes... 3
nodes costs more than 1 |
16:57.24 |
Maloeran |
That's what I'm writing state sync'ing with in
mind: intelligent use of the library |
16:57.29 |
``Erik |
from what I'm told |
16:57.46 |
Maloeran |
Ah, sounds like my model prep threads :), I'll
fix that though |
16:57.50 |
``Erik |
(and the hell project... 2 nodes costs more
than 1) |
16:57.55 |
Maloeran |
Ahahahaha |
16:57.59 |
``Erik |
except the app is almost totally
distributable... |
16:58.14 |
Maloeran |
That is so wrong |
16:58.25 |
``Erik |
the "hard part" that he couldn't figure out
was ordering the results for the output... and, y'know... dir...
catch 'em out of order and bin them in a tree or
something |
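The "catch 'em out of order and bin them" idea Erik describes is essentially a reorder buffer: results carry a sequence number, arrivals are parked in a window, and everything contiguous from the next expected number is flushed in order. A hypothetical C sketch, not code from any of the projects mentioned:

    #define WINDOW 1024               /* max results in flight at once */

    typedef struct {
        int   valid;
        void *data;
    } slot_t;

    static slot_t   window[WINDOW];
    static unsigned next_seq = 0;     /* next sequence number to emit */

    /* Application-defined consumer of in-order results. */
    extern void emit(void *data);

    /* Caller guarantees seq < next_seq + WINDOW (bounded outstanding work). */
    void result_arrived(unsigned seq, void *data)
    {
        window[seq % WINDOW].data  = data;
        window[seq % WINDOW].valid = 1;

        /* Flush everything that is now contiguous from next_seq onward. */
        while (window[next_seq % WINDOW].valid) {
            emit(window[next_seq % WINDOW].data);
            window[next_seq % WINDOW].valid = 0;
            next_seq++;
        }
    }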
16:59.08 |
``Erik |
it's a sad state of affairs |
16:59.13 |
``Erik |
but, y'know, fuck it, I'm on
vacation |
16:59.25 |
Maloeran |
I have no idea what the horror project is
actually meant to do, but it really has to be properly re-written,
in real programming languages by competent people |
17:01.27 |
``Erik |
hm, doesn't even need to be properly
re-written, or in a real programming language... I did a day hack
on librt that was outrunning the original C version and the new
java version by several orders of magnitude... |
17:01.36 |
``Erik |
using... librt... the slow csg one...
:D |
17:01.37 |
Maloeran |
Seriously, I'm writing state synchronisation
for intelligent use of the library, where the user will run
"shaders" remotely and return packed high-level results ; this is
not low-level distributed processing, where rays are traced
remotely and results returned |
17:01.46 |
Maloeran |
That would use soooo much bandwidth, it's
unthinkable |
17:02.32 |
``Erik |
I'm thinking when I get back to the office,
I'll have to write a lame 'workalike' to the retarded app and wire
rf and adrt into it |
17:02.48 |
``Erik |
something I can give you so you can see what
data needs moved around |
17:03.37 |
``Erik |
<-- doesn't go back until the 9th
though |
17:04.19 |
Maloeran |
I see, okay. "Vacation" or "work" are pretty
much the same to me |
17:05.00 |
``Erik |
used to be for me... *shrug* |
17:05.19 |
``Erik |
I went and got old... I have personal projects
to do in my 'vacation' time :) |
17:05.58 |
Maloeran |
Ah, such pretexts, I'm sure it's just that the
work projects aren't interesting enough :) |
17:05.58 |
``Erik |
generally not little "tweak it for a few %
gain" stuff, but good old forward thinking stuff... gotta keep it
very separate, so if I decide to try to make some $'s, there's no
issues ;) |
17:06.45 |
``Erik |
and, yeah, I steer towards very high level
languages... harder to tweak, but hard problems become easy and
impossible ones become tractable O:-D |
17:07.28 |
Maloeran |
Pfft :), assembly gets so easy to debug with
some practice *cough* |
17:08.07 |
Maloeran |
I look forward to writing assembly pipelines,
eventually, I want my extra 20-30% |
17:08.18 |
``Erik |
yeah, but take a skilled person in asm vs a
skilled person in, say, scheme or lithp... or smalltalk... or
erlang... or ml... |
17:08.30 |
``Erik |
give a task, see who has a working solution
first |
17:09.07 |
``Erik |
if I can do in a few weeks what'd take a
decade in asm, fuck, I'll do it in a few weeks... and the problems
that interest me tend NOT to be cpu bound |
17:09.09 |
``Erik |
:) |
17:09.35 |
Maloeran |
:) Sure, I know |
17:09.51 |
``Erik |
<-- exploring huge scheduling stuff with
hierarchical notions and dependencies |
17:10.01 |
Maloeran |
Even for "high-level" tasks, I hardly move
away from C though, it's just too fluent in comparison to my
Lisp |
17:10.13 |
``Erik |
and adequate graph reduction to keep the
working set tiny |
17:10.30 |
``Erik |
obviously you know that fluency can only be
gained and retained by exercise :) |
17:10.58 |
Maloeran |
I know :), but C has the upper hand in
performance, and I'm not sure Lisp would be that much faster to
write |
17:11.10 |
``Erik |
in that case, you should write fortran
code |
17:11.15 |
Maloeran |
Since I already got so much C code I reuse for
everything related to memory management, and so on |
17:11.22 |
``Erik |
heh |
17:11.41 |
``Erik |
lithp does its own memory management... your C
is superfluous. |
17:11.52 |
Maloeran |
I would bet mine is faster |
17:11.59 |
``Erik |
mebbe |
17:12.12 |
``Erik |
lisp compilers tend to make pretty tight
memory pools |
17:12.38 |
``Erik |
I wouldn't be surprised if your memory stuff
was fairly similar to a lot of memory stuff in lisp, scheme, perl,
etc |
17:13.06 |
``Erik |
you might have an advantage by JUST pooling
and not doing gc |
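A minimal example of the "just pooling, no gc" advantage Erik mentions: freed objects go on a free list and are handed straight back on the next allocation, with no collector ever walking the heap. Illustrative only:

    #include <stdlib.h>

    typedef union node { union node *next; } node_t;  /* payload reuses this space */

    typedef struct {
        node_t *free_list;
        size_t  obj_size;
    } pool_t;

    void pool_init(pool_t *p, size_t obj_size)
    {
        p->free_list = NULL;
        p->obj_size  = obj_size < sizeof(node_t) ? sizeof(node_t) : obj_size;
    }

    void *pool_alloc(pool_t *p)
    {
        if (p->free_list) {               /* hot path: reuse a freed object */
            node_t *n = p->free_list;
            p->free_list = n->next;
            return n;
        }
        return malloc(p->obj_size);       /* cold path: fresh memory */
    }

    void pool_free(pool_t *p, void *obj)  /* O(1); memory stays in the pool */
    {
        node_t *n = obj;
        n->next = p->free_list;
        p->free_list = n;
    }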
17:13.07 |
Maloeran |
Perhaps so, but the memory management part is
solved either way |
17:13.31 |
``Erik |
well, actually, you do reference
counting |
17:13.42 |
``Erik |
so technically, you do have gc... you just
blow up if you go cyclic |
17:13.48 |
``Erik |
blow up or permanently leak |
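The cycle problem Erik is pointing at, in a dozen lines: once A references B and B references A, releasing the external references leaves both counts stuck at one forever. Illustrative sketch:

    #include <stdlib.h>

    typedef struct obj {
        int         refs;
        struct obj *other;
    } obj_t;

    static obj_t *obj_new(void)     { obj_t *o = calloc(1, sizeof *o); o->refs = 1; return o; }
    static void   obj_ref(obj_t *o) { o->refs++; }
    static void   obj_unref(obj_t *o)
    {
        if (--o->refs == 0) {
            if (o->other) obj_unref(o->other);
            free(o);
        }
    }

    int main(void)
    {
        obj_t *a = obj_new(), *b = obj_new();
        a->other = b; obj_ref(b);   /* a -> b                 */
        b->other = a; obj_ref(a);   /* b -> a: the cycle      */
        obj_unref(a);               /* refs drop to 1, not 0  */
        obj_unref(b);               /* same: both objects leak */
        return 0;
    }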
17:14.14 |
Maloeran |
That's a code flaw easily tracked and
fixed |
17:15.01 |
Maloeran |
Understood, hence why C performs better and
why I use it |
17:15.13 |
Maloeran |
If we had Lisp chips, I might well switch
over |
17:15.24 |
``Erik |
be interesting to see a high level language
designed by someone with intimate knowledge of modern hw |
17:15.38 |
``Erik |
C is very tightly bound to the pdp11 chip,
dude |
17:15.55 |
Maloeran |
Personally, I use whatever language maps to
the underlying hardware well, delivering proper performance and
control |
17:16.11 |
``Erik |
lisp was pretty rocking on certain pdp's where
"complex" operations were single clock |
17:16.18 |
``Erik |
like, car/cdr pairs |
17:16.22 |
``Erik |
just a register access |
17:16.27 |
Maloeran |
car/cdr? |
17:16.31 |
``Erik |
cons? one load |
17:16.44 |
``Erik |
umm, yeah? |
17:16.54 |
``Erik |
uhhhhhhhh, "head" and "tail"? |
17:17.08 |
``Erik |
(car '(a b c)) -> a |
17:17.13 |
Maloeran |
Ah yes, as in Lisp |
17:17.16 |
``Erik |
(cdr '(a b c)) -> '(b c) |
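For anyone not fluent in lisp: a cons cell is just a pair of machine words, and car/cdr pick out the two halves (the names originally stood for "contents of the address/decrement part of register"), which is why a single register access could fetch either on the hardware Erik describes. A plain-C rendering:

    /* A cons cell is two pointers; "car" is the head, "cdr" the tail. */
    typedef struct cons {
        struct cons *car;   /* first element (or a tagged atom) */
        struct cons *cdr;   /* rest of the list                 */
    } cons_t;

    #define CAR(c) ((c)->car)   /* (car '(a b c)) -> a     */
    #define CDR(c) ((c)->cdr)   /* (cdr '(a b c)) -> (b c) */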
17:17.21 |
Maloeran |
I was thinking of assembly instruction
names |
17:17.26 |
``Erik |
they, uh |
17:17.27 |
``Erik |
are |
17:17.31 |
``Erik |
assembly instruction names |
17:17.32 |
``Erik |
... |
17:17.36 |
``Erik |
on the pdp1 |
17:17.39 |
Maloeran |
*nods* Not on the archs I know :) |
17:17.41 |
``Erik |
or was it 8 |
17:18.22 |
Maloeran |
Really, your position is that Lisp would be
great if the chips were meant for it, and I don't contest
that |
17:18.22 |
``Erik |
basically addressing like ah and al out of an
ax, if you can stomach my archaic 16b 386 terminology |
17:18.33 |
Maloeran |
But reality is a bit different these
days... |
17:18.42 |
``Erik |
my position is ALSO that C would be great if
the chips were meant for it |
17:18.47 |
``Erik |
and I don't think the chips are meant for
it |
17:19.01 |
Maloeran |
Chips are a lot closer to C than Lisp, at
least |
17:19.16 |
``Erik |
I'm not so sure about that |
17:19.21 |
Maloeran |
C with GCC's built-in pseudo-functions, C
extensions and intrinsics is fairly decent |
17:19.35 |
``Erik |
naive implementations of lisp and C, the C
will probably come out a fair bit better |
17:20.11 |
``Erik |
but it's a translation problem, one that is
unfortunately being worked on by more C people than other language
people |
17:20.11 |
Maloeran |
Compilers aren't known to ever do a great job,
no matter the language |
17:20.13 |
``Erik |
*shrug* |
17:20.34 |
``Erik |
and cpu run time is kinda a fairly minor
aspect of the cost of computing, anyways |
17:21.11 |
Maloeran |
That's highly variable, but I always played
with cpu intensive code, personally |
17:21.28 |
``Erik |
so you're in an odd niche :) |
17:21.44 |
Maloeran |
I'm fine with that :) |
17:22.12 |
``Erik |
most code these days sits around with its
thumb up its ass waiting for the stupid human to respond |
17:22.27 |
``Erik |
and another large bulk of code is run very
infrequently, maybe once ever... |
17:22.53 |
``Erik |
spending developer time doing petty
bookkeeping with C or asm is... illogical in those
situations |
17:23.03 |
Maloeran |
Agreed, of course |
17:23.35 |
``Erik |
use something that gets a working product to
the machine as quickly as possible... unfortunately, too many
people lock themselves into a certain track of programming and
don't explore adequately... |
17:24.03 |
``Erik |
too many java programmers don't know jack shit
about C, so they don't understand how to use the machine in funny
ways to make things easy and simple |
17:24.10 |
Maloeran |
I'm interested in computers for doing intense
processing for simulations or other number crunching; petty
bookkeeping does not interest me in the slightest |
17:24.37 |
``Erik |
and too many C programmers never gain a strong
fluency in something like lithp, so they never understand the fu of
real macros or full number towers |
17:25.12 |
``Erik |
dude, you write a memory mgmt library...
you're working to abstract away the petty bookkeeping |
17:25.13 |
``Erik |
:) |
17:25.22 |
Maloeran |
This elegance can get in the way of efficiency
too |
17:25.26 |
``Erik |
and walking right into Greenspun's tenth rule in
the process |
17:25.32 |
``Erik |
*shrug* |
17:25.47 |
``Erik |
I'd rather write a program really quickly in a
high level language... |
17:25.55 |
``Erik |
figure out how I can make the algorithms
better to make it faster |
17:26.05 |
``Erik |
and THEN start reducing the 'expensive' parts
to lower languages |
17:26.11 |
``Erik |
like portable pdp assembly, er, uh, I mean,
C |
17:26.11 |
Maloeran |
It isn't always about processor time
efficiency, there are Java programs eating gigabytes of
ram |
17:26.23 |
``Erik |
heh, true... that's just... wrong |
17:26.38 |
``Erik |
java is an excellent example of how to do
everything wrong |
17:26.45 |
Maloeran |
Eheh, exactly |
17:26.46 |
``Erik |
almost as bad as c# |
17:27.43 |
``Erik |
<-- notes that lisp lived in a land where
4k of ram was considered huge, with heavy computation theory
background... calling java up as a counter argument is just a low
blow and wrong |
17:28.10 |
Maloeran |
Ahah |
17:28.36 |
``Erik |
but as justin likes to point out, I'm very
much on the 'computer science' aspect and not so much on the
engineering side... I dig reading up on algorithms, and I know some
about church, gödel, turing, ... |
17:28.36 |
``Erik |
:) |
17:28.56 |
Maloeran |
Ew.. Yes I noticed that. You'll find me weird,
but I'm not comfortable with any language where I can't be sure
what assembly the compiler will spit out |
17:29.26 |
Maloeran |
I like writing C, look at any chunk of
assembly and know exactly where I am in the software |
17:29.37 |
``Erik |
you can see the output of lithp in asm or
machine code if you want |
17:29.51 |
Maloeran |
Sure, I don't think I'm neglecting algorithmic
optimisations |
17:29.54 |
``Erik |
lithp is primarily a compiled language, if all
else fails, hit it with a decompiler |
17:30.05 |
``Erik |
and I use some scheme compilers that output
C |
17:30.06 |
``Erik |
*shrug* |
17:30.14 |
``Erik |
don't confuse the language with the evaluation
mechanism :D |
17:31.07 |
``Erik |
(of course, chicken's C output is eye bleeding
horrible, heh... good&naive... gcc doesn't seem too upset about
it, though) |
17:32.07 |
Maloeran |
Eh now, the output of properly written C isn't
that bad :) |
17:32.37 |
Maloeran |
Compilers remain stupid, but considering the
amount of work that has been put in GCC, I don't expect other
non-gcc languages to perform better |
17:33.19 |
``Erik |
heh, but I could put a trivial amount of
effort into an assembler |
17:33.28 |
``Erik |
and it could perform better, provided a
competent assembly programmer |
17:33.30 |
``Erik |
:) |
17:33.47 |
``Erik |
and I still view assembly as fairly compiled,
I used to do mnems on the c64 o.O |
17:34.23 |
Maloeran |
Indeed, but Lisp is farther than C from
assembly considering the current hardware ; more work for the
compiler = poorer code |
17:34.37 |
``Erik |
I don't know about that |
17:35.03 |
Maloeran |
So much work put in GCC, yet it just seems to
stupid sometimes... I really have the impression I could write
better |
17:35.17 |
Maloeran |
It isn't the optimisation that bothers me, it's
all the higher-level parsing and stuff |
17:35.38 |
Maloeran |
so* stupid |
17:35.42 |
``Erik |
I'm not big on common lisp... but lisp 1.5 has
almost every single language component being a single fast opcode,
I think |
17:36.03 |
``Erik |
scheme has a good deal of that... but the way
it's all written these days... :/ |
17:36.29 |
``Erik |
a compiler to bytecode and a bytecode
interpreter... written in C... usually not very well.. |
17:36.36 |
``Erik |
which doesn't map cleanly to the
machine |
17:36.58 |
``Erik |
mebbe if I get time, I'll try to write a tight
scheme->ml compiler for amd64 or something :) |
17:37.07 |
``Erik |
ml as in machine language, not sml or
ocaml |
17:37.33 |
Maloeran |
I'm secretly pleased that processor speeds are
hitting a ceiling, perhaps people will rediscovere the value of
efficient languages |
17:37.40 |
Maloeran |
rediscover* |
17:38.11 |
``Erik |
nah, the notion of vectorization is coming
back into fad... |
17:38.17 |
``Erik |
can't make 'em faster, so make more of
'em... |
17:38.25 |
``Erik |
pentium6 now with 1024 cores! |
17:38.38 |
Maloeran |
There's a big problem with that : it doesn't
scale |
17:38.49 |
``Erik |
vector computers in the 70's could do 4x4
matrix mults in one clock :/ |
17:39.06 |
Maloeran |
The more cores you have, the more in-cache
synchronisation you require, it gets messy |
17:39.14 |
``Erik |
it doesn't scale because: hw sucks. and
programmers suck. |
17:39.14 |
``Erik |
:) |
17:39.26 |
Maloeran |
Yes, instruction-level vectorization is great,
but that's fairly low-level |
17:39.37 |
Maloeran |
Hence the added value to all low-level
languages |
17:39.53 |
``Erik |
only cuz the compiler writers... well... suck
:D |
17:40.10 |
Maloeran |
Pfft, C has got all I need on that aspect
:) |
17:40.55 |
Maloeran |
Our current x86/amd64 architectures are soo
not meant to scale by adding new cores/processors |
17:41.25 |
``Erik |
definitely not |
17:41.54 |
``Erik |
I d'no much about amd64, but the x86 is a
grotesque pile of shit with hacks built on it, shoulda died in the
70's |
17:42.07 |
Maloeran |
Which isn't a bad thing : we will be forced to
leave x86 behind definitely, I hope! |
17:42.33 |
``Erik |
ppc even has cruft and lameness built on, but
it's *SO* much nicer |
17:42.46 |
Maloeran |
I want my arrays of 256 processors at 400mhz
with a proper architecture to scale |
17:43.05 |
``Erik |
I enjoyed the 6510 monitor/mnems... hated 386
asm... but really really liked r2k asm |
17:43.25 |
``Erik |
'proper arch' like numa? |
17:43.33 |
``Erik |
hypercube? |
17:43.39 |
``Erik |
or something 'new'? |
17:43.48 |
Maloeran |
Numa works somewhat, but I don't think it
scales too well past a point |
17:43.49 |
``Erik |
smp seems awful coarse |
17:44.09 |
``Erik |
and if we have a metric assload of cores, why
not go asymmetric? |
17:44.14 |
Maloeran |
For each processor, the cache synchronisation
circuitry keeps growing with the total count of
processors |
17:44.23 |
Maloeran |
Exactly |
17:44.27 |
``Erik |
yeah, I dedicated one of my 128 procs to
manage the data motion... but... y'know? so what? |
17:45.25 |
Maloeran |
Personally, I would be an advocate of
software-based memory and cache synchronisation |
17:45.53 |
Maloeran |
Let the programmer, the software manage memory
; it's too much complex circuitry for the hardware, it can't
scale |
17:46.00 |
``Erik |
I d'no.. hw mmu's made vm pracical |
17:46.03 |
``Erik |
practical |
17:46.31 |
``Erik |
and in the 60's, ibm's cpu's were microcode vm
beasties, and amdahl made custom chips that smoked the ibm things
bigtime |
17:47.00 |
``Erik |
(and yes. I really really dig computer
history. A lot. I don't think you can really move forward until you
REALLY understand the past.) |
17:47.45 |
Maloeran |
I agree with MMU, I'm just saying the software
should explicitgely do "put X into that large shared memory bank so
other processors will access it" |
17:48.03 |
Maloeran |
Rather than have the other processors ask
"Hey, has anyone got that in their cache? Is that copy
up-to-date?" |
17:48.16 |
Maloeran |
explicitely* |
17:48.52 |
Maloeran |
Each processor with its own memory, one or
several shared memory banks, perhaps different levels |
17:49.40 |
Maloeran |
Software would have to be written differently
in all aspects related to memory management, but that would scale
as well as it can get |
17:59.26 |
Maloeran |
Dumb example : 256 processors, each got its
memory bank X, each group of 16 processors has a shared bank Y, and
a bank Z on top of all Y. All processors can DMA to/from the shared
memory banks asynchronously |
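An interface sketch (prototypes only, all names hypothetical) of the explicit, software-managed sharing Maloeran describes: instead of hardware snooping caches, the program states when data moves between its local bank and the shared ones, and the transfers run as asynchronous DMA.

    #include <stddef.h>

    typedef enum { BANK_LOCAL, BANK_GROUP, BANK_GLOBAL } bank_t;   /* X, Y, Z */

    /* Start an asynchronous copy from local memory into a shared bank;
     * returns a handle that can be waited on later. */
    int  dma_put(bank_t dst, size_t dst_off, const void *src, size_t len);

    /* Pull data from a shared bank back into local memory. */
    int  dma_get(void *dst, bank_t src, size_t src_off, size_t len);

    /* Block until a previously started transfer has completed. */
    void dma_wait(int handle);

    /* Typical pattern: compute into local memory, then explicitly publish
     * the result to the group bank for the other 15 cores in the group. */
    void publish_result(const void *result, size_t len)
    {
        int h = dma_put(BANK_GROUP, 0, result, len);
        dma_wait(h);
    }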
18:03.13 |
``Erik |
hrm, y and z seem... silly... ever built
hw? |
18:04.41 |
Maloeran |
How else would you scale shared
memory? |
18:05.10 |
``Erik |
(actually, if you look at an mmu on an smp
system... each alu has its cache... if the data it needs isn't in
its cache... it asks the next level... which, y'know, might be l2 or
might be main memory... or might be disk drive... so I guess it
already does that, heh) |
18:05.18 |
``Erik |
but every time you write to memory |
18:05.23 |
``Erik |
it has to tell the l1 |
18:05.25 |
``Erik |
and then the l2 |
18:05.28 |
``Erik |
and then main memory |
18:05.33 |
``Erik |
until there's a shared universal vm |
18:05.58 |
Maloeran |
But what if the up-to-date cache line isn't in
main memory but in another processor cache? |
18:06.06 |
``Erik |
um |
18:06.07 |
Maloeran |
That's a big factor in the
non-scalability |
18:06.10 |
``Erik |
if... |
18:06.13 |
``Erik |
you write... memory... |
18:06.19 |
``Erik |
it has to... IMMEDIATELY go all the way
out |
18:06.36 |
``Erik |
and the 'all the way out' (universal vm) can't
have other things dicking with it at the time |
18:07.00 |
``Erik |
which is why you need a machine with multiple
cores, so you can feel the pain firsthand :D |
18:07.26 |
Maloeran |
Got 2 cores with shared cache, eh well. I'll
get something soon |
18:07.53 |
``Erik |
it'd be rare that two cores with shared cache
stomp on each other TOO much |
18:07.57 |
Maloeran |
Trying to get SURVICE to switch over to direct
deposit to avoid the 1 month delay for U.S. check deposit, then I
could get it in a few days |
18:08.20 |
``Erik |
something with multiple cores... something
SLOW with multiple cores would help exacerbate the issue |
18:08.28 |
Maloeran |
They still have to ensure coherency, due to
the hardware rather than software synchronisation |
18:08.50 |
``Erik |
mmu has such a notion :) |
18:08.56 |
``Erik |
it's the gatekeeper of memory |
18:08.56 |
Maloeran |
If the hardware was to expect the software to
explicitly state when something must imperatively be shared, we
wouldn't have that problem |
18:10.05 |
Maloeran |
And a big memory bank with too many cores
playing in it can't please any memory controller |
18:10.20 |
Maloeran |
Hence the idea of a bank per processor, plus
shared banks |
18:11.48 |
Maloeran |
Hardware synchronisation makes it easy for the
programmers, but it isn't friendly to hardware scalability at
all |
18:13.11 |
``Erik |
hw isn't magic, dude.. for the most part, it
just does what the os says |
18:13.46 |
Maloeran |
Hardware synchronisation between cpu caches
isn't up to the OS |
18:13.54 |
``Erik |
I mean, yeah, throw a lock, it goes down to
the mmu |
18:13.57 |
``Erik |
and it's reserved |
18:14.06 |
``Erik |
uhmmm, no, it's facilitated by the
os |
18:14.28 |
``Erik |
it's up to the threading capability...
pthreads in your case :) |
18:14.46 |
``Erik |
if it was all up to the hw, you'd never have
locking issues or funky multi-threading bugs |
18:15.02 |
``Erik |
throw the mmu lock by grabbing a mutex or
something |
18:15.08 |
``Erik |
dick with the memory |
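The pattern Erik is gesturing at, with pthreads (which he notes is what Maloeran uses): take the mutex, touch the shared memory, release it so the update becomes visible to the other threads.

    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static long shared_counter = 0;

    void bump_shared(void)
    {
        pthread_mutex_lock(&lock);    /* acquire: exclude other threads   */
        shared_counter++;             /* safely modify the shared memory  */
        pthread_mutex_unlock(&lock);  /* release: make the update visible */
    }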
18:15.30 |
``Erik |
reads might be short, but iirc, writes are
long |
18:16.04 |
``Erik |
hrm |
18:16.12 |
``Erik |
I'd have to re-read the material to
remember |
18:17.08 |
``Erik |
been too long :) |
18:17.14 |
Maloeran |
Perhaps so should I, but from my current
knowledge, the current cache synchronisation between processors is
a huge problem for scalability |
18:19.01 |
Maloeran |
The more total processors and memory banks you
have in a Numa design, the more hypertransport links _each_
processor requires |
18:19.15 |
``Erik |
ok, hypertransport is newer |
18:19.36 |
``Erik |
but in old smp, the main memory was the
primary information bridge... (the universal vm, actually... might
be in swap) |
18:20.05 |
``Erik |
so when you write, it has to fall all the way
through to main memory... |
18:20.15 |
``Erik |
hrm |
18:20.18 |
``Erik |
now I'm confusing myself |
18:20.19 |
``Erik |
o.O |
18:21.53 |
Maloeran |
We should write some raytracing hardware to
clear things up, and accidentally design the future's memory
model |
18:22.26 |
``Erik |
hrm |
18:22.36 |
``Erik |
ingo et al may've beaten you to that |
18:22.37 |
``Erik |
:) |
18:23.11 |
``Erik |
dr ingo wald... has a co iirc |
18:23.39 |
``Erik |
openrt |
18:23.49 |
Maloeran |
Ah, doesn't mean we can't do better
:) |
18:24.08 |
``Erik |
heh |
18:24.14 |
``Erik |
one thing I learned a while back |
18:24.20 |
``Erik |
there's always someone there to do
better |
18:25.06 |
Maloeran |
Is that an excuse not to do anythingy?
:) |
18:25.11 |
Maloeran |
anything, rather |
18:25.40 |
``Erik |
of course not |
18:25.52 |
``Erik |
it's a reason to always do the best you can
quickly, and always look for new horizons |
18:26.06 |
``Erik |
stagnation will finish you :) |
18:26.17 |
``Erik |
always strive to learn more, do more, be
more... |
18:26.17 |
``Erik |
:) |
18:27.19 |
Maloeran |
Or just strive to enjoy life, hoping these
will naturally come as side-effects |
18:27.39 |
``Erik |
perhaps |
18:27.40 |
``Erik |
heh |
18:27.47 |
``Erik |
honestly, that may be the path I'm more o
n |
18:27.50 |
``Erik |
more on |
18:27.53 |
``Erik |
<-- moron o.O |
18:28.14 |
``Erik |
or perhaps my goals are slightly less
grandiose |
18:30.04 |
Maloeran |
To me, raytracing hardware would be really fun
and new, any other objectives are a pretext |
18:30.33 |
Maloeran |
Pretexts that management might prefer to "It
looks fun!" |
18:31.39 |
Maloeran |
Plus, we would have abundant time to argue
about scalable memory models |
19:08.48 |
Twingy |
I want an Official Red Ryder Carbine-Action
Two-Hundred-Shot Range Model Air Rifle! |
19:52.57 |
*** join/#brlcad DanielFalck
(n=dan@pool-71-111-98-172.ptldor.dsl-w.verizon.net) |
23:25.19 |
Maloeran |
I think it's the first time I can ride a
bicycle wearing just a shirt to attend christmas social
gatherings |