Software components done right?
2004-02-23 17:22:23.905007+00 by
Dan Lyke
5 comments
A nerdly question: I've been looking at the flaws of the .NET system
recently. We "solved" the one mentioned a little over a week ago by
running the call in another thread and canceling it if it hadn't
completed after a short time, but if you don't trust the called
components then threads have issues, and there are all of the
performance issues with .NET besides. So I got to wondering:
Has anyone out there experimented with a component system that
communicates with chroot()ed processes running under some other UID
through pipes? Is there a standard for this? I believe that pipes are
darned close to the speed of memcpy() on modern operating systems, a
reasonable interface spec could also be expanded to work between
machines, it would be relatively easy to give each component a
different set of privileges, you'd have process-level separation, and
you could do things like set up the framework to have timeouts on
given calls.
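Something like this rough C sketch is what I have in mind; the jail
directory and uid are made up for illustration, a real framework would
wrap all of this, and it has to start as root for chroot()/setuid() to
succeed:

#include <poll.h>
#include <stdio.h>
#include <unistd.h>

#define COMPONENT_ROOT  "/var/components/echo"  /* jail dir: made up */
#define COMPONENT_UID   20001                   /* unprivileged uid: made up */
#define CALL_TIMEOUT_MS 500

int main(void)
{
    int to_comp[2], from_comp[2];
    if (pipe(to_comp) < 0 || pipe(from_comp) < 0) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {
        /* Component side: jail it and drop privileges before serving. */
        close(to_comp[1]); close(from_comp[0]);
        if (chroot(COMPONENT_ROOT) < 0 || chdir("/") < 0 ||
            setuid(COMPONENT_UID) < 0)
            _exit(1);
        char buf[256];
        ssize_t n;
        while ((n = read(to_comp[0], buf, sizeof buf)) > 0)
            write(from_comp[1], buf, n);   /* trivial echo "component" */
        _exit(0);
    }

    /* Caller side: one "call", abandoned if no reply within the timeout. */
    close(to_comp[0]); close(from_comp[1]);
    write(to_comp[1], "ping", 4);
    struct pollfd pfd = { from_comp[0], POLLIN, 0 };
    if (poll(&pfd, 1, CALL_TIMEOUT_MS) <= 0) {
        fprintf(stderr, "component timed out; kill and restart it here\n");
        return 1;
    }
    char reply[256];
    ssize_t n = read(from_comp[0], reply, sizeof reply);
    printf("got %d byte reply\n", (int)n);
    return 0;
}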
And some basic autogenerated code could create interface modules for
different languages which would give you real inter-language
integration without trying to cram everything into the same underlying
VisualBasic framework.
[ related topics:
Microsoft, Software Engineering
]
comments in ascending chronological order:
#Comment Re: made: 2004-02-23 17:55:03.12662+00 by:
aiworks
On Windows (yeah, Windows sucks), interprocess pipes are implemented as memory-mapped files (just about all of that interprocess stuff is). So let me ask a question: is what you're proposing any different from out-of-process COM+ with the security turned on?
In terms of speed (again, Windows-centric here), the memory copying is trivial compared to the signaling (mutexes, condition variables, etc.) that has to happen to support the double-ended pipe. I'm sure you know that the signaling causes at least the running CPU to dump its cache back to memory. That's really expensive. I remember profiling this on Windows, and a PII CPU could only generate something like 5,000 signals per second. That's several orders of magnitude slower than an in-process call.
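If you want to see the order of magnitude on your own box, a crude
ping-pong sketch like this counts cross-process round trips over a
pipe pair (not a rigorous benchmark; older glibc may need -lrt for
clock_gettime):

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    int ping[2], pong[2];
    if (pipe(ping) < 0 || pipe(pong) < 0) { perror("pipe"); return 1; }

    if (fork() == 0) {
        char c;
        while (read(ping[0], &c, 1) == 1)  /* echo every byte back */
            write(pong[1], &c, 1);
        _exit(0);
    }

    enum { ROUNDS = 100000 };
    char c = 'x';
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ROUNDS; i++) {
        write(ping[1], &c, 1);             /* one signal each way per round */
        read(pong[0], &c, 1);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.0f round trips/sec\n", ROUNDS / secs);
    return 0;
}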
#Comment Re: what about shared memory made: 2004-02-23 18:34:51.48856+00 by:
flushy
if you're doing pipes, why not use shared memory instead? With pipes, you have the overhead of the IO framework (buffered/unbuffered reads and writes, file descriptors, state management, etc).
Even if interprocess pipes are implemented as mmap'd files, you still have the IO overhead of the pipe's IO streams. You also still have to come up with (or use) a library, common to all your languages, that translates objects from one format to another. You'll do this for your pipes anyway (basically converting the data to serial form), so why not do the same for shared memory segments?
The result will be the same, I believe, but with much less overhead. You could even make it threadable using spin locks (I forget the article, but I believe mutexes were very expensive on Windows).
I'm not sure how security would come into play here. To get total control, you may have to implement a localhost client/server model. If you were looking at pipes anyway, then perhaps a client/server model isn't too much more of a stretch.
I guess it matters what you want most: security or performance.
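For the curious, here's a minimal sketch of the shared memory idea: an
anonymous shared mapping set up before fork() plus a process-shared
pthread spin lock. The message layout is made up for illustration;
compile with -lpthread:

#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

struct channel {
    pthread_spinlock_t lock;
    int ready;          /* 1 when msg holds fresh data */
    char msg[128];      /* serialized call data would go here */
};

int main(void)
{
    struct channel *ch = mmap(NULL, sizeof *ch, PROT_READ | PROT_WRITE,
                              MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (ch == MAP_FAILED) { perror("mmap"); return 1; }
    pthread_spin_init(&ch->lock, PTHREAD_PROCESS_SHARED);
    ch->ready = 0;

    if (fork() == 0) {                       /* "component" process */
        for (;;) {
            pthread_spin_lock(&ch->lock);
            if (ch->ready) {
                printf("component got: %s\n", ch->msg);
                ch->ready = 0;
                pthread_spin_unlock(&ch->lock);
                _exit(0);
            }
            pthread_spin_unlock(&ch->lock);  /* spin until data arrives */
        }
    }

    pthread_spin_lock(&ch->lock);            /* caller writes one message */
    strcpy(ch->msg, "hello component");
    ch->ready = 1;
    pthread_spin_unlock(&ch->lock);

    wait(NULL);
    return 0;
}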
#Comment Re: made: 2004-02-23 20:12:37.499108+00 by:
Dan Lyke
I haven't looked at implementation issues on Windows yet, since for all of its flaws .NET works fairly okay over there. I was actually thinking about frameworks on Un*x that'd leapfrog that.
I remember the pain of pipes on Windows from my days working on Net RenderMan, but you guys are right that if I pursue this I should be thinking cross-platform.
#Comment Re: made: 2004-02-23 20:59:24.004449+00 by:
aiworks
You know I have to put on my crufty-sounding voice and talk about an existing implementation of components that works very well.
Yes, boys and girls, gather 'round and let me tell you about IMS and CICS running on MVS. You see, IMS and CICS are transaction processing systems; in this context, a transaction means a unit of work. Transactions start life in response to an action (a user screen submitted, a scheduled timer pop, etc.) or by another transaction (which can spawn it synchronously or asynchronously under CICS). When a transaction is spawned it is given resource consumption targets (clock cycles, memory, storage, etc.) that will fail the transaction if exceeded.
There's isolation inside of a transaction (one transaction can't screw with another one). Furthermore, because of the MVS architecture, component libraries (procs) have two very important rules they live by:
-ALL resources that a proc consumes are properly closed (by MVS) after the proc returns. This means datasets are closed, memory is freed, database connections are closed, etc.
-A proc may ONLY access memory that is passed to it. There is no possibility of caller memory corruption.
Now, put these two together. A transaction runs in response to a user action. That transaction spawns another transaction synchronously and gives it resource constraints. That transaction either succeeds or fails, and the result (and the reason, on failure) is passed back to the original transaction. There is no possibility of memory/resource corruption, and it all runs at in-process speed. A failing transaction is both reported to the user and automatically logged for a sysadmin to look at. In the case of resource constraints, the user retrying the transaction will probably take care of the problem for that user (while the sysadmin is still made aware of the broader problem).
Neither UNIX nor Windows can do this at as low a level as MVS does (J2EE starts to approach it, but the tools aren't all quite there). It's because of this that IT departments in sleepy insurance carriers and banks can build mainframe-based systems that never break, but can't do the same on other platforms as consistently.
The mainframe has a lot of warts (especially around culture), but there's some fascinating technology up there. It seems that most big architectural problems I run into were solved on the mainframe long ago.
#Comment Re: made: 2004-02-24 17:33:33.888804+00 by:
Dan Lyke
Thanks for reminding me of that, Mark. I've long liked the idea of "programming by contract", but it's only been recently that languages (as opposed to less granular systems) have started to catch up to the idea.
And I believe that most kernels have hooks for trapping resource use, so a framework that supports this (with some implicit defaults, which was my big complaint with MVS/JCL) at function call or object instance granularity, and that's efficient enough to be used in interactive applications, might rock.
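For example, a sketch of those hooks on Unix, using setrlimit() to cap
a spawned call so a runaway component fails itself instead of the
caller. The limits here are arbitrary illustrative defaults, and
RLIMIT_AS in particular varies across Unixes:

#include <stdio.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        struct rlimit cpu = { 2, 2 };               /* 2 CPU-seconds */
        struct rlimit mem = { 64 << 20, 64 << 20 }; /* 64 MB address space */
        setrlimit(RLIMIT_CPU, &cpu);
        setrlimit(RLIMIT_AS, &mem);
        for (;;) ;   /* the "component" misbehaves: spins forever */
    }

    int status;
    waitpid(pid, &status, 0);
    if (WIFSIGNALED(status))
        /* an RLIMIT_CPU overrun kills the child with SIGXCPU/SIGKILL */
        printf("call failed: killed by signal %d\n", WTERMSIG(status));
    else
        printf("call finished with status %d\n", WEXITSTATUS(status));
    return 0;
}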
Or it might just be needless complexity.