One interesting phenomenon I've noticed recently is a tendency to categorize something (and often dismiss it) based on its plot mechanic. "The Hunger Games" has been compared to numerous other 'many enter, one leaves, and everybody watches' stories, especially ones involving children. "Limitless" gets compared to any other story involving medical intelligence enhancement, and apparently "Flowers for Algernon" is the canonical example.

I find this sort of distressing. There is a great deal more to a movie than its plot mechanic. Plot is simply the skeleton of a story, not the most important part. It's true that problems with the skeleton have a serious negative effect on the whole story, but a story is not its skeleton.

"The Hunger Games", for example, is a story about severe oppression. The games are only a symptom of that oppression. They are certainly not the defining feature of that movie.

Anyway, this is just a minor rant. :-)

IPv6 is supposed to solve all of the peer connectivity issues introduced by NAT. And, on the surface, it seems to do just that by making it possible to assign a unique, globally routable IP address to every conceivable device that could possibly want one.

But this doesn't really solve the problem of peer connectivity.

My cell phone, for example, may be assigned an address by my carrier. But my carrier may be unwilling to let me have any more addresses. This means that any devices I want to connect to the Internet through my cell phone will not be able to have globally routable addresses because my ISP/cell carrier won't route them. And, of course, under IPv6, nobody is ever supposed to do NAT.

So, peer connectivity is still constrained by network topology. Whoever has the power to decide who gets to be a router decides what gets to connect. And this is broken.

IMHO, the solution is to assign addresses in a way that has nothing to do with routing, and to allow a routing layer on top of the network layer that can route things to those addresses regardless of the actual topology of the network. Tor is an example of this sort of thing: it is basically a routing layer on top of TCP/IP that's designed to obscure which routes any given piece of information takes.

But Tor is a specific example of a larger issue. Routing cannot be left ultimately controlled by anybody except the network end-points. Anything else creates failure modes, both physical and political, that fall significantly short of the best we can do.

Which is one of the biggest advantages of a protocol like CAKE. :-) It divorces routing from addressing and expects end-nodes to have a hand in making routing decisions.

I'm working on a small library to express computations in terms of composable trees of dependencies. These dependencies can cross thread boundaries allowing one thread to depend on a result generated in another thread. This is sort of a riff on the whole promise and future concept, but the idea is that you have chains of these with a potential fanout in the chain greater than 1. Kind of like the venerable make utility in which you express what things need to be finished before starting on the particular thing you're talking about.
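To make that concrete, here's the flavor of API I have in mind, as a minimal C++11 sketch (every name here, including make_node, is hypothetical): each node runs its work only after its prerequisite nodes have produced values, much like a make target waiting on its prerequisites. This naive version spawns a thread per node and blocks on its dependencies; a real implementation would schedule more cleverly.

    #include <future>
    #include <iostream>
    #include <vector>

    // Launch `work` asynchronously once all dependency futures are ready.
    // Like a make target: prerequisites first, then the target itself.
    template <typename T, typename F>
    std::shared_future<T> make_node(F work,
                                    std::vector<std::shared_future<T>> deps =
                                        std::vector<std::shared_future<T>>()) {
        return std::async(std::launch::async, [work, deps]() {
            std::vector<T> inputs;
            for (const auto &d : deps) inputs.push_back(d.get()); // wait on deps
            return work(inputs);
        }).share();
    }

    int main() {
        auto a = make_node<int>([](const std::vector<int> &) { return 2; });
        auto b = make_node<int>([](const std::vector<int> &) { return 3; });
        // c has a fanout of two in its dependency chain: it needs both a and b.
        auto c = make_node<int>(
            [](const std::vector<int> &in) { return in[0] * in[1]; }, {a, b});
        std::cout << c.get() << "\n"; // prints 6
    }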

But I'm not sure what I should call it. Maybe Teleo, because it encourages you to express your program in terms of a teleology.

I'm writing this basically because I've encountered the same problem on at least two different projects now, and it occurs to me that it would be really good to have a well-defined, standard way of launching things in other threads and waiting for the results, one that suggested an overall program architecture. The projects I worked on were each on their way to developing a huge mishmash of different techniques that wouldn't necessarily play well together or be easy to debug.

I used to have a really good idea of what the architecture should be for a system that had to respond to multiple possible sources of input or other reasons to do things (such as some interval of time expiring). My idea was basically to make everything purely event-driven and have big event loops at the heart of the program that dispatched events and got things done.

This solves the vexing problem of how to deal with all these asynchronous occurrences without incurring excessively complex synchronization logic. Nothing gives up control to process another event until the data structures it's working with are in a consistent state.
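Something like this, in miniature (a C++11 sketch; all the names are mine): one loop pulls events off a queue and runs each handler to completion, so no handler ever sees another's data structures mid-update.

    #include <functional>
    #include <map>
    #include <queue>

    struct Event { int type; };

    // A single-threaded dispatch loop: handlers run one at a time, to
    // completion, so shared state is consistent whenever control changes.
    class EventLoop {
    public:
        void on(int type, std::function<void(const Event &)> handler) {
            handlers_[type] = std::move(handler);
        }
        void post(const Event &e) { pending_.push(e); }
        void run() {
            while (!pending_.empty()) {
                Event e = pending_.front();
                pending_.pop();
                auto it = handlers_.find(e.type);
                if (it != handlers_.end()) it->second(e);
            }
        }
    private:
        std::map<int, std::function<void(const Event &)>> handlers_;
        std::queue<Event> pending_;
    };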

But there are two problems with this model. One is old, and one is relatively new.

The old problem is that such event-driven systems typically exhibit inversion of control, and that makes them confusing and hard to follow. There are ways to structure your program to give people a lot of hints as to what's supposed to happen next when you give up control in the middle of an important operation only to recapture it again at some later point in time in a completely different function. But it's still not the easiest thing in the world to follow.

The 'new' problem is that silicon-based CPUs have not been getting especially faster recently. They've instead been getting more numerous. This is a fairly predictable result. CPUs have a clock, and this clock needs to stay synchronized across the entire CPU. Once clock speeds exceed a certain frequency, the clock signal takes longer to propagate across the entire chip than the time before the next pulse is supposed to happen. As a rough sanity check: at 3 GHz a clock cycle lasts about a third of a nanosecond, in which time even light travels only about 10 cm, and signals on a chip propagate considerably slower than light. This means that in order to have an effectively faster CPU on a single chip you need to break it up into independent units that do not need to be strictly synchronized with each other. It's a state horizon problem.

But most programs are not designed to take advantage of several CPUs. If you want a program that's a cohesive whole, but still gets faster as the hardware advances, you need to break it up into several threads.

It seems like it might be simple to extend this model to a program with multiple threads: you just have multiple event loops. But then you end up with several interesting problems. How do you decide what happens in which event loop? What happens if you need to share data between things running on different event loops? You run the risk of re-introducing the synchronization issues you avoided when you adopted event loops in the first place, while still paying the cost of inversion of control. It doesn't seem worth it.

Additionally, if you have inter-thread synchronization, what happens if it takes a while for the other thread to free up the resource you need? How do you prevent deadlocks? Most event systems don't allow you to treat the release of a mutex or a semaphore as an event, so you can't fold waiting for a mutex back into the system as just another event without some trick like spawning a thread that waits for the mutex and writes into some sort of IPC mechanism once it's acquired.
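Sketched with POSIX pieces (the function name is mine), the trick looks something like this: a throwaway thread blocks acquiring a semaphore, and once it holds it, writes a byte into a pipe, so the event loop can treat readability of the pipe as 'the semaphore is ours' alongside its ordinary I/O events.

    #include <semaphore.h>
    #include <thread>
    #include <unistd.h>

    // Helper thread: block on the semaphore, then poke the event loop
    // through a pipe. The loop's select()/poll() sees the pipe's read end
    // become readable and dispatches that as an ordinary event.
    void notify_when_acquired(sem_t *sem, int write_fd) {
        std::thread([sem, write_fd] {
            sem_wait(sem);                    // may block for a long while
            char byte = 1;
            (void)write(write_fd, &byte, 1);  // "acquired" becomes an event
        }).detach();
    }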

And splitting up your program into multiple event threads is not trivial either. How do you detect and prevent the case of one thread being overworked? There is also 'state kiting' to consider. Ideally, one CPU would handle the same modifiable state for long periods of time. You want to avoid situations where first one CPU's cache, then the next, has to load up the contents of a particular memory region. Typically, each core has its own cache. If for no reason other than efficient use of space, it would be good if each core cached a disjoint set of memory locations. And to avoid the latency of main memory access, it would be good if that set were relatively static. This means that a single event loop should be working with a fairly small and unchanging set of memory locations.
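On Linux there is at least a blunt instrument for encouraging this; a sketch (not a recommendation) using the glibc-specific pthread_setaffinity_np: pin each event-loop thread to one core so the state it touches tends to stay in that core's cache.

    #define _GNU_SOURCE  // for pthread_setaffinity_np (glibc)
    #include <pthread.h>
    #include <sched.h>

    // Pin the calling thread to a single CPU so the memory it works with
    // tends to stay resident in that CPU's cache instead of kiting around.
    void pin_current_thread_to(int cpu) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    }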

So simply having several threads, each with its own event loop, seems a solution fraught with peril, and it feels like you're throwing away a lot of the advantages for which you went to an event-driven system (with its unpleasant inversion-of-control side effect) in the first place.

So the original idea needs modification, or perhaps a completely new idea is needed.

One modification is embodied in the language Erlang. Erlang still has an event loop and inversion of control. You wait for messages that come in on a queue, and any other loop can add messages to any queue it knows about. These messages are roughly analogous to events. But the messages themselves convey only immutable information, and since that information is immutable, shared or not, no synchronization is required; it cannot change.
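A rough C++11 analogue of that mailbox idea (my sketch; Erlang's real machinery is far more sophisticated): the queue itself still needs a lock, but because messages are immutable, nobody ever needs to synchronize access to a message's contents.

    #include <condition_variable>
    #include <deque>
    #include <memory>
    #include <mutex>

    // One mailbox per event loop. Payloads are const: once sent, a
    // message cannot change, so sharing it across threads is safe.
    template <typename Msg>
    class Mailbox {
    public:
        void send(std::shared_ptr<const Msg> m) {
            std::lock_guard<std::mutex> lk(mu_);
            queue_.push_back(std::move(m));
            cv_.notify_one();
        }
        std::shared_ptr<const Msg> receive() {  // blocks, like Erlang's receive
            std::unique_lock<std::mutex> lk(mu_);
            cv_.wait(lk, [this] { return !queue_.empty(); });
            auto m = queue_.front();
            queue_.pop_front();
            return m;
        }
    private:
        std::mutex mu_;
        std::condition_variable cv_;
        std::deque<std::shared_ptr<const Msg>> queue_;
    };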

Erlang also encourages the creation of many such event loops, each of which does a very small job. Hopefully, no individual loop is too overloaded. Modern operating systems are adept at scheduling many jobs, and so this offloads the scheduling of all of these small tasks onto the OS.

I do not think Erlang does much to solve the locality of reference problem, though.

Another approach is the approach taken by the E programming language. It makes extensive use of a concept called a 'future' or 'promise'. This is a promise to deliver the result of some operation at some future point in time. It allows these promises to be chained, so you can build up an elaborate structure of dependencies between promises. In a sense, the programming language handles the inversion of control for you. You specify the program as if control flow were normal, but the language environment automatically launches as many concurrent requests as possible and suspends execution until the results are available.

It is possible to build a set of library-level tools in C++11 to implement this kind of thing somewhat transparently in that language.
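For instance, one such tool might be a then() that chains a continuation onto a std::future, E-style. A naive sketch (it burns a thread per link in the chain; a serious version would need a scheduler):

    #include <future>
    #include <iostream>
    #include <utility>

    // Run `cont` on the value of `f` once it is ready, yielding a new
    // future for the continuation's result.
    template <typename T, typename F>
    auto then(std::future<T> f, F cont)
        -> std::future<decltype(cont(std::declval<T>()))> {
        auto shared = f.share();  // shared_future is copyable into the lambda
        return std::async(std::launch::async,
                          [shared, cont]() { return cont(shared.get()); });
    }

    int main() {
        auto pipeline = then(std::async(std::launch::async, [] { return 21; }),
                             [](int v) { return v * 2; });
        std::cout << pipeline.get() << "\n"; // prints 42
    }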

I am unsure if there are any major tradeoffs in this approach. Certainly in C++ there is a great deal of implementation complexity, and that complexity cannot be completely hidden from the user as it is in E. I wonder if that implementation complexity introduces unacceptable overhead.

I also suspect that it may be difficult to debug programs that use this sort of a model. They appear to execute sequentially, but in truth they do not. It is possible, for example, to have two outstanding promises for bytes from a file descriptor, but which order those promises will be fulfilled in will not be readily apparent from reading the code. And error conditions can crop up at strange times and propagate to non-obvious places in the control flow of your program.

I also suspect this model will not exhibit the best locality of reference semantics. There will be a tendency to frequently spawn and join threads to handle asynchronous requests. And it will not be immediately apparent to the OS CPU scheduler which threads need to work with which memory objects. And this may lead to active state kiting between CPUs.

Also, those calls to create and destroy threads have a cost. Even if that cost is fairly small, it's still likely much more expensive than acquiring an unowned mutex, and probably more expensive than the call to wait for a file-descriptor readability event or for a briefly held mutex to become available.

Of course, it may be possible to implement all of this without creating many threads given a sufficiently clever runtime environment that implements its own queue that folds IO state and semaphore/mutex state events into a single queue. Such an environment would still need a lot of help from the application programmer though to divide up the application to maximize locality of reference within a single thread.

This is a fairly long ramble, and I'm still not really sure what the best approach is. I think I may try to set up some kind of 'smart queue'. This queue will have a priority queue of runnable tasks, and a queue of tasks that could potentially execute given a set of conditions. When a condition is met, the queue will be informed, and if that condition enables one or more tasks to run, those tasks will be added to the priority queue.

I envision that the priority queue will be prioritized primarily by the length of time since the task was added to the 'wait for condition' list.
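A first sketch of that smart queue in C++11 (all names provisional; locking and the folding-in of I/O readiness are left out): tasks wait on an opaque condition id, and when a condition is signaled its tasks are promoted to the runnable queue, oldest wait first.

    #include <chrono>
    #include <functional>
    #include <map>
    #include <queue>
    #include <vector>

    class SmartQueue {
        using Clock = std::chrono::steady_clock;
        struct Waiting {
            Clock::time_point since;       // when the task started waiting
            std::function<void()> task;
        };
        struct Older {                     // oldest wait = highest priority
            bool operator()(const Waiting &a, const Waiting &b) const {
                return a.since > b.since;
            }
        };
    public:
        // Park a task until `condition` is signaled.
        void wait_on(int condition, std::function<void()> task) {
            waiting_[condition].push_back(Waiting{Clock::now(), std::move(task)});
        }
        // A condition was met: promote its tasks to the runnable queue.
        void signal(int condition) {
            for (auto &w : waiting_[condition]) runnable_.push(std::move(w));
            waiting_.erase(condition);
        }
        // Run the runnable task that has been waiting the longest.
        void run_one() {
            if (runnable_.empty()) return;
            std::function<void()> task = runnable_.top().task;
            runnable_.pop();
            task();
        }
    private:
        std::map<int, std::vector<Waiting>> waiting_;
        std::priority_queue<Waiting, std::vector<Waiting>, Older> runnable_;
    };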

I can then write a C++11 library that will allow you to automatically turn any function that returns a promise into a function that uses these conditions to split up its execution. At least, if you use sufficient care in writing the function.

The conditions (since fulfilling a promise will be a possible condition) will have data associated with them. If this data involves shared mutable state, that will require a great deal of extra care.

Random rambling and noodling about a CAKE implementation issue

I've been puzzling over a minimal and orthogonal set of properties for a session. I at first thought there were 3:

Message boundaries preserved
Whether your messages are delivered in discrete units, or as a stream of bytes in which the original sizes of the send calls bear no relevance to how the bytes are chunked together on the other end.

Ordered
Whether or not data arrives in the order you sent it.

Reliable
Well, this has a tricky definition. For TCP it means that failure to deliver is considered a failure of the underlying connection. But after such a failure you can't really be sure exactly which bytes were delivered and which weren't.

But, as is evidenced by my description of 'reliable', these properties are not as hard-edged as they might seem. I also thought about latency, for example a connection via email is relatively high latency, and a connection between memory and the CPU is generally pretty low latency. But I'm looking for hard-edged, yes/no type properties that are in some sense fundamental. Latency seems like a property that's rather fuzzy. It exists on a continuum, and isn't really a defining feature of a connection, something that would drastically alter how you wrote programs that used the connection. In an object model, it would be an object property, not something you'd make a different class for.

But I find TCP's notion of 'reliability' very curious. It isn't really, in any sense, particularly reliable. I've had ssh connections that died, but when I reconnect to my screen session, I discover that a whole bunch of the stuff I was typing made it through, it just wasn't echoed back.

It also interacts with 'ordered' in an odd way. It might make sense to have an unordered connection that was 'reliable', but what does that really mean then? If it's a TCP notion of reliability, you could just deliver the last message and have the connection drop. Also, what would it mean to have an unreliable, but ordered connection? Would that mean you could send a bunch of messages and have only the first and last ones delivered? And would it make any sense at all to have an unordered, unreliable connection in which message boundaries were not preserved?

So I've come up with a different division...

Message boundaries preserved
Whether your messages are delivered in discrete units, or as a stream of bytes in which the original sizes of the send calls bear no relevance to how the bytes are chunked together on the other end.

Ordered
Whether or not data arrives in the order you sent it.

Must not drop
If a message does not make it through, the connection is considered to be in an unrecoverable error state, and no further messages may be sent, though you may not know which message didn't make it through.

Delivery notification
Whether or not you can know that a message made it to the other side.

These are not fully orthogonal. For example, if message boundaries are not preserved, then, for a connection to be at all sensible, it must also have the 'ordered' and 'must not drop' properties. Also, if you must not drop messages, I'm not sure it would be sensible to have out-of-order delivery.

One of the rules of the system I'm designing is that any property that is not required may be provided anyway. This makes non-orthogonality much easier to deal with. So the prior cases aren't really a problem.
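That rule is easy to state in code. A sketch (the bitmask encoding and all names are mine, not anything from CAKE itself): a transport is acceptable if the properties it provides are a superset of the properties required.

    #include <cstdint>

    enum SessionProperty : std::uint8_t {
        kBoundaries  = 1 << 0,  // message boundaries preserved
        kOrdered     = 1 << 1,  // data arrives in the order sent
        kMustNotDrop = 1 << 2,  // a lost message is an unrecoverable error
        kDeliveryAck = 1 << 3,  // sender can learn a message arrived
    };

    // Providing more than was asked for is fine; providing less is not.
    inline bool acceptable(std::uint8_t provided, std::uint8_t required) {
        return (provided & required) == required;
    }

    // e.g. a TCP-like stream provides kOrdered | kMustNotDrop, which
    // satisfies a requirement of just kOrdered.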

Can any of you think of a better set of properties, or important properties that I left out?

Some good discussion also happens in this Google Buzz post that mirrors this entry.

Building codes serve a few functions. The most important one is safety. But another is ensuring that your home does not fall to pieces in 10 years (after the builders are long gone) by forcing certain minimum standards of construction.

To the latter end, I think building codes for multi-unit dwellings should require every single unit to have a fiber drop in the unit. I assume there are standards for phone hookups today (and possibly cable), and the fiber standard would have a very similar purpose and structure.
