Someone I know pointed me at an interesting article about 'russophobia' and the long and conflicted history of US public opinion of Russia. One of the points it made was that complaining about Trump being under Russian influence is pretty rich given that we essentially placed Yeltsin in power in Russia in the 90s. Either that article or a podcast he pointed me at also expressed worry that the 'intelligence community' (which I will hereafter refer to as the deep state, without quotes) is taking it upon itself to depose a democratically elected leader.

While I think the article makes for interesting and thoughtful reading, I disagree with its author about Trump. It's not that I disagree with the facts; I disagree with the conclusions drawn from them.

When we meddle in other countries, I think those countries are perfectly justified in trying to undo the effects of our meddling, and in trying to remove our ability to meddle further in their affairs. This opinion forms a backbone of my dislike of our military interventions.

Much the same as the citizens of the countries we try to meddle in, we have an obligation to ourselves and to each other to have our country be run by us, not some foreign government. Unless we collectively make an explicit decision to turn over the reins to a foreign government, we should do everything in our power to make sure that doesn't happen.

Of course, Trump was democratically elected according to the rules we've adopted for elections. And a large portion (though very definitely not a majority) of the electorate voted for him. As a democracy, we cannot simply ignore their interests. But the issue of our government being under the influence of a foreign one is orthogonal, and one that must be addressed separately.

It does disturb me that our own deep state has it in for Trump. It does make me skeptical of the information they provide. But in this case there appears to be a fair amount of corroboration. In general though, the way our deep state acts as our keeper rather than our servant is something that has disturbed me for a long time.

I do wonder though how many of our other leaders have had similar problems that the deep state has chosen not to reveal because it wasn't in their interests to do so. This is a whole different issue, and part of the massive problems we currently face as a nation. Problems that Trump is ill-equipped to solve, even if he wasn't a likely puppet of another (hostile) government.

I have an excellent, clear-cut example of how the market can fail both workers and consumers in the service of owners.

Target is currently facing a fairly large petition asking it not to open until a reasonable hour on Black Friday so its employees can actually spend Thanksgiving with their families. You might think this is just a workers' issue, but it isn't. Many consumers have signed this petition, and the general trend of retail outlets opening earlier and earlier on Black Friday is anti-consumer.

And here's why. Basically, people show up early to make sure they can get certain gifts before they're sold out. For those people, it's well worth it to show up before the store opens to make sure they get their chosen gift.

But each store then has an incentive to open before the others. The first store to open gets the lion's share of those consumers.

But those consumers don't actually want to wake up at 3am to get to the store before it opens. They'd be much happier waking up at 6am, or even later. But because the stores are now competing with each other to open earlier, consumers are forced to choose between giving up the gift they want and waking up extremely early to get to the store before it's out of stock.

The 'invisible hand' of the market creates a destructive cycle that's bad for everybody.

I'm not certain what a good solution to this problem is. One would hope a gentlemen's agreement on a decent opening time would work. But I doubt it would, especially since front-line store employees have little say in whether such agreements are honored.

But the point here is that the free market creates a situation in which both consumers and employees get the short end of the stick. Both end up in a situation neither of them desires.

One interesting phenomenon I've noticed recently is a tendency to categorize a story (and often dismiss it) based on its plot mechanic. "The Hunger Games" has been compared to numerous other 'many enter, one leaves, and everybody watches' stories, especially ones involving children. "Limitless" gets compared to any other story involving medical intelligence enhancement, and apparently "Flowers for Algernon" is the canonical example.

I find this sort of distressing. There is a great deal more to a movie than its plot mechanic. Plot is simply the skeleton of a story, not the most important part. It's true that if the skeleton has problems it has a serious negative effect on the whole story, but a story is not its skeleton.

"The Hunger Games", for example, is a story about severe oppression. The games are only a symptom of that oppression. They are certainly not the defining feature of that movie.

Anyway, this is just a minor rant. :-)

The free rider problem exists in many more contexts than people might think. I came to this realization recently while talking with someone about why I really did think her choice to shop around based on price was an admirable one even if I, personally, didn't do it.

In that context, I'm a free rider. I reap the benefits of the people who do shop around because they create an incentive for merchants to lower prices. But I do not engage in behavior that will create those incentives myself because it's costly in terms of time and attention.

Similarly, people who use technology that's closed and locked down are free riders on people who consciously choose not to use such technology. It can be argued that almost every innovative thing we've seen on the Internet in the last 10 years is a direct result of openness and a lack of concern for, or even outright hostility towards, the idea of 'intellectual property'. Oh, it's true that individual innovators sometimes try to achieve lock-in. But because of people like me, their products generally don't succeed as well as the products of those who don't. And once the open product achieves critical mass, network effects and the overwhelming advantages of openness do the rest and drive the closed product to the fringes of the market.

I don't think free riders are necessarily bad. A significant number of them can make markets inefficient. But maybe people don't actually care about those kinds of inefficiencies. If they did, they would make different choices.

But looking at these things through the lens of the free rider problem is a really interesting perspective. And I think the idea is much more broadly applicable than it's usually given credit for.

It also explains why free riders will not necessarily kill the creation of new and interesting stuff. There are many parts of the market that thrive even when there are a significant number of free riders. When something in the market changes enough that people get upset over the inefficiency created, they stop being free riders.

I went to watch the much-hyped movie today. I was prepared to be revolted, and I was. Not by the movie or the story. It was well-told, powerful and moving.

I heard people chat idly about the theme appearing in other movies. I saw them smiling as they exited the theater, talking about the finer points of the plot. I saw them wearing nice clothes for an afternoon out.

In the movie I saw the people in The Capitol District chatting idly about the 'contestants'. I saw them smiling and cheering over the 'victory'. I saw them wearing their nicest clothes for the occasion.

I, for a long while, couldn't tell the difference.

That book (and the movie) was written as fiction. But I'm sure the author meant it as a mirror.

Do I vote for that one, or this one? Meaningless choices that we chatter about endlessly, trotting out our best justifications. Few brave enough to make the choice for what they want. The choices are an avoidance of risk, a choice based on fear, not on hope.

That was the choice presented to the two characters at the end of the movie. A choice they were encouraged to make based on fear. The whole system rigged for it. And they made the choice based on hope, the choice the system couldn't tolerate.

I feel like that's what our 'democracy' has degenerated to. A circus, a spectacle geared towards making each of us, individually, make a choice based on fear of what the other guy will do.

I was angry because of the movie. Upset, crying. The happy people around me... I didn't understand. A whole passel of children died on the screen. Horrible deaths, lives shortened needlessly in the service of the subjugation of a whole people.

It's a happy occasion. Time to put on your best stuff and chat idly about it with your friends. There is no mirror. There is no tragedy. The movie has no relevance beyond entertainment. A lie to cover the unbearable truth.

I was angry, I was saddened, and I was revolted.

Yeah, I know, preachy and overbearing. Listen to the message for a change instead of complaining about how it's presented. I too will go back to life as usual. But even a moment of solemnity and understanding of a shared predicament might have been nice.

IPv6 is supposed to solve all of the peer connectivity issues introduced by NAT. And, on the surface, it seems to do just that by making it possible to assign a unique, globally routable IP address to every conceivable device that could possibly want one.

But this doesn't really solve the problem of peer connectivity.

My cell phone, for example, may be assigned an address by my carrier. But my carrier may be unwilling to let me have any more addresses. This means that any devices I want to connect to the Internet through my cell phone will not be able to have globally routable addresses because my ISP/cell carrier won't route them. And, of course, under IPv6, nobody is ever supposed to do NAT.

So, peer connectivity is still constrained by network topology. Whoever has the power to decide who gets to be a router decides what gets to connect. And this is broken.

IMHO, the solution is to have addresses assigned to things that have nothing to do with routing, and allow a routing layer on top of the network layer that can route things to those addresses regardless of the actual topology of the network. Tor is an example of this sort of thing. Tor is basically a routing layer on top of TCP/IP that's designed to obscure which routes any given piece of information takes.

But Tor is a specific example of a larger issue: routing cannot be left ultimately controlled by anybody except the network's end-points. Leaving it elsewhere creates failure modes, both physical and political, that fall significantly short of the best we can do.

Which is one of the biggest advantages of a protocol like CAKE. :-) It divorces routing from addressing and expects end-nodes to have a hand in making routing decisions.

Today, a comment I got really rankled me. My affection and desire for technologies that are not freedom hostile was called a 'religious issue'. This trivializes my desire, and makes it seem like someone has to 'drink the kool-aid' to think the issue is real. And that's insulting.

I find this particularly upsetting given how many people rallied to defeat SOPA. Do people not understand the end goal here? Do you really want your technologies to decide for you which websites you're allowed to see, what you can read, what you can hear? Because ignoring freedom when making technology choices is marching down that very road.

Oh, those companies, they'll never do that. But, they will. Maybe they don't even realize they will. But that kind of lockdown and control is so very economically attractive that companies will march there inexorably unless it's clear that's not a direction we want to go in.

And your choices affect me. Whenever you make a choice against freedom, you're affecting my ability to make that choice. It is possible to make technology that works and is convenient, but doesn't rob you of your freedom. But every time you vote with your dollars against such technology, every time you decide this feature or that feature is worth giving up some of your freedom, you're encouraging companies to dangle shiny toys in exchange for your freedom. In fact, you're encouraging them to only provide the shiny toys if you (and I) give up our freedom to get them. It's like giving in to a toddler who throws tantrums.

I recognize that different people make different choices for their own reasons. And I'm fine with them making those choices. But I will not pass up any opportunity to inform them of the effect of their choice on themselves, and on me.

I'm working on a small library to express computations in terms of composable trees of dependencies. These dependencies can cross thread boundaries allowing one thread to depend on a result generated in another thread. This is sort of a riff on the whole promise and future concept, but the idea is that you have chains of these with a potential fanout in the chain greater than 1. Kind of like the venerable make utility in which you express what things need to be finished before starting on the particular thing you're talking about.

But I'm not sure what I should call it. Maybe Teleo, because it encourages you to express your program in terms of a teleology.
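To make that concrete, here's a rough sketch in Python of the shape I have in mind. All of the names here are placeholders, not the library's actual interface:

    from concurrent.futures import ThreadPoolExecutor

    class Target:
        """A computation plus the targets whose results it depends on,
        like a rule and its prerequisites in a makefile."""
        def __init__(self, func, *deps):
            self.func = func    # called with the results of the dependencies
            self.deps = deps    # other Target objects that must finish first

        def resolve(self, executor):
            # Schedule dependencies first; they may run on other threads.
            futures = [dep.resolve(executor) for dep in self.deps]
            return executor.submit(
                lambda: self.func(*[f.result() for f in futures]))

    # 'Compile' two objects concurrently, then 'link' them.
    obj1 = Target(lambda: "obj1")
    obj2 = Target(lambda: "obj2")
    link = Target(lambda a, b: "linked(%s, %s)" % (a, b), obj1, obj2)
    with ThreadPoolExecutor() as pool:
        print(link.resolve(pool).result())    # linked(obj1, obj2)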

I'm writing this basically because I've encountered the same problem on at least two different projects now, and it occurs to me that it would be really good to have a well-defined, standard way of launching things in other threads and waiting for the results, one that suggested an overall program architecture. The projects I worked on were each well on their way to developing a huge mishmash of different techniques that wouldn't necessarily play well together or be easy to debug.

I used to have a really good idea of what the architecture should be for a system that had to respond to multiple different possible sources of input or other reasons to do things (such as some interval of time expiring). My idea was basically to make everything purely event-driven and have big event loops at the heart of the program that dispatched events and got things done.

This solves the vexing problem of how to deal with all these asynchronous occurrences without incurring excessively complex synchronization logic. Nothing gives up control to process another event until the data structures it's working with are in a consistent state.
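A toy sketch of that shape in Python (a real loop would also fold file-descriptor readiness in via select() or similar; this one only handles timers):

    import heapq, itertools, time

    class EventLoop:
        def __init__(self):
            self._timers = []              # heap of (deadline, seq, callback)
            self._seq = itertools.count()  # tie-breaker so callbacks never compare

        def call_later(self, delay, callback):
            heapq.heappush(self._timers,
                           (time.monotonic() + delay, next(self._seq), callback))

        def run(self):
            while self._timers:
                deadline, _, callback = heapq.heappop(self._timers)
                time.sleep(max(0.0, deadline - time.monotonic()))
                callback()   # runs to completion; no locks needed anywhere

    loop = EventLoop()
    loop.call_later(0.1, lambda: print("timer fired"))
    loop.run()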

But there are two problems with this model. One is old, and one is relatively new.

The old problem is that such event-driven systems typically exhibit inversion of control, and that makes them confusing and hard to follow. There are ways to structure your program to give people a lot of hints as to what's supposed to happen next when you give up control in the middle of an important operation only to recapture it again at some later point in time in a completely different function. But it's still not the easiest thing in the world to follow.

The 'new' problem is that silicon-based CPUs have not been getting especially faster recently. They've instead been getting more numerous. This is a fairly predictable result. CPUs have a clock. This clock needs to stay synchronized across the entire CPU. Once clock speeds exceed a certain frequency, the clock signal takes longer to propagate across the entire chip than the amount of time before the next pulse is supposed to happen. This means that in order to have an effectively faster CPU on a single chip you need to break it up into independent units that do not need to be strictly synchronized with each other. It's a state horizon problem.

But most programs are not designed to take advantage of several CPUs. If you want a program that's a cohesive whole, but still gets faster as the hardware advances, you need to break it up into several threads.

It seems like maybe it would be simple to do this with a program that had multiple threads. You just have multiple event loops. But then you end up with several interesting problems. How do you decide what things happen in which event loop? What happens if you need to have data shared between things running on different event loops? You run the risk of re-introducing the synchronization issues you avoided when you added the event loops in the first place, all with the cost of inversion of control. It doesn't seem worth it.

Additionally, if you have inter-thread synchronization, what happens if it takes a while for the other thread to free up the resource you need? How do you prevent deadlocks? Most event systems don't allow you to treat the release of a mutex or a semaphore as an event, so you can't fold waiting for a mutex into the system as just another event without doing some trick like spawning a thread that waits for the mutex and writes into some sort of IPC mechanism once it's acquired.

And splitting up your program into multiple event threads is not trivial either. How do you detect and prevent the case of one thread being overworked? There is also 'state kiting' to consider. Ideally, one CPU would handle the same modifiable state for long periods of time. You want to avoid situations where first one CPU cache, then the next, has to load up the contents of a particular memory region. Typically, each core will have its own cache. If for no reason other than efficient use of space, it would be good if each core had a disjoint set of memory locations in cache. And to avoid the latency of main memory access, it would be good if that set were relatively static. This means that a single event loop should be working with a fairly small and unchanging set of memory locations.

So simply having several threads, each with its own event loop, seems a solution fraught with peril, and it seems like you're throwing away a lot of the advantages you went to an event-driven system (with its unpleasant inversion-of-control side effect) for in the first place.

So the original idea needs modification, or perhaps a completely new idea is needed.

One modification is embodied in the language Erlang. Erlang still has an event loop and inversion of control. You wait for messages that come in on a queue, and any other loop can add messages to any queue it knows about. These messages are roughly analogous to events. But the messages themselves convey only immutable information, and since the data is immutable, shared or not, no synchronization is required; it cannot change.

Erlang also encourages the creation of many such event loops, each of which does a very small job. Hopefully, no individual loop is too overloaded. Modern operating systems are adept at scheduling many jobs, and so this offloads the scheduling of all of these small tasks onto the OS.

I do not think Erlang does much to solve the locality-of-reference problem, though.
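For illustration, here is the Erlang model approximated in Python (Erlang itself uses lightweight processes and receive; this just shows the shape):

    import queue, threading

    def actor(mailbox, name):
        # Each loop owns its mailbox; all mutable state stays local to it.
        while True:
            msg = mailbox.get()            # block until a message arrives
            if msg is None:                # sentinel: shut down
                return
            print("%s got %r" % (name, msg))

    mailbox = queue.Queue()
    worker = threading.Thread(target=actor, args=(mailbox, "worker"))
    worker.start()
    mailbox.put(("add", 1, 2))             # tuples are immutable, safe to share
    mailbox.put(None)
    worker.join()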

Another approach is the approach taken by the E programming language. It makes extensive use of a concept called a 'future' or 'promise'. This is a promise to deliver the result of some operation at some future point in time. It allows these promises to be chained, so you can build up an elaborate structure of dependencies between promises. In a sense, the programming language handles the inversion of control for you. You specify the program as if control flow were normal, but the language environment automatically launches as many concurrent requests as possible and suspends execution until the results are available.

It is possible to build a set of library-level tools in C++11 to implement this kind of thing somewhat transparently in that language.
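A sketch of what chaining looks like, in Python rather than C++ for brevity. The then() helper here is hypothetical glue I'm inventing for the example, not anything from the standard library:

    from concurrent.futures import Future, ThreadPoolExecutor

    def then(future, func):
        # Produce a new future for func applied to future's result,
        # without blocking the caller.
        chained = Future()
        def propagate(done):
            try:
                chained.set_result(func(done.result()))
            except Exception as exc:       # errors flow down the chain
                chained.set_exception(exc)
        future.add_done_callback(propagate)
        return chained

    with ThreadPoolExecutor() as pool:
        f = pool.submit(lambda: 21)        # some concurrent computation
        g = then(f, lambda x: x * 2)       # declared as if control flow were normal
        print(g.result())                  # 42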

I am unsure if there are any major tradeoffs in this approach. Certainly in C++ there is a great deal of implementation complexity, and that complexity cannot be completely hidden from the user as it is in E. I wonder if that implementation complexity introduces unacceptable overhead.

I also suspect that it may be difficult to debug programs that use this sort of a model. They appear to execute sequentially, but in truth they do not. It is possible, for example, to have two outstanding promises for bytes from a file descriptor, but which order those promises will be fulfilled in will not be readily apparent from reading the code. And error conditions can crop up at strange times and propagate to non-obvious places in the control flow of your program.

I also suspect this model will not exhibit the best locality of reference semantics. There will be a tendency to frequently spawn and join threads to handle asynchronous requests. And it will not be immediately apparent to the OS CPU scheduler which threads need to work with which memory objects. And this may lead to active state kiting between CPUs.

Also, the calls to create and destroy threads have a cost. Even if that cost is fairly small, it's still likely much more expensive than acquiring an unowned mutex, and probably more expensive than the call to wait for a file-descriptor readability event or for a briefly held mutex to become available.

Of course, it may be possible to implement all of this without creating many threads given a sufficiently clever runtime environment that implements its own queue that folds IO state and semaphore/mutex state events into a single queue. Such an environment would still need a lot of help from the application programmer though to divide up the application to maximize locality of reference within a single thread.

This is a fairly long ramble, and I'm still not really sure what the best approach is. I think I may try to set up some kind of 'smart queue'. This queue will have a priority queue of runnable tasks, and a queue of tasks that could potentially execute given a set of conditions. When a condition is met, the queue will be informed, and if that condition enables one or more tasks to be run, those tasks will be added to the priority queue.

I envision that the priority queue will be ordered primarily by the length of time since the task was added to the 'wait for condition' list.
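A first rough sketch of that queue in Python (all names hypothetical):

    import heapq, itertools, time

    class SmartQueue:
        def __init__(self):
            self._runnable = []             # heap of (wait_start, seq, task)
            self._waiting = []              # (unmet conditions, wait_start, task)
            self._seq = itertools.count()

        def add(self, task, conditions=()):
            conds = set(conditions)
            if conds:
                self._waiting.append((conds, time.monotonic(), task))
            else:
                heapq.heappush(self._runnable,
                               (time.monotonic(), next(self._seq), task))

        def signal(self, condition):
            # A condition was met; promote any tasks it fully enables.
            still_waiting = []
            for conds, start, task in self._waiting:
                conds.discard(condition)
                if conds:
                    still_waiting.append((conds, start, task))
                else:   # keyed by wait_start: longest-waiting runs first
                    heapq.heappush(self._runnable,
                                   (start, next(self._seq), task))
            self._waiting = still_waiting

        def run_next(self):
            if self._runnable:
                _, _, task = heapq.heappop(self._runnable)
                task()

    q = SmartQueue()
    q.add(lambda: print("ran once io_ready fired"), conditions=["io_ready"])
    q.signal("io_ready")
    q.run_next()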

I can then write a C++11 library that will allow you to automatically turn any function that returns a promise into a function that uses these conditions to split up its execution. At least, if you use sufficient care in writing the function.

The conditions (since fulfilling a promise will be a possible condition) will have data associated with them. If this data involves shared mutable state, that will require a great deal of extra care.


I've been paying a lot of attention to bitcoin recently. It's a fascinating idea, and I'm really curious as to where it will go. But reading the comments on the Internet about it is even more interesting, though also kind of upsetting. People say the most ridiculous and stupid things, and it's all out of nearly violent emotion. I don't really understand.

Some people say ridiculous things like "It can only go up!" (in reference to the USD/Bitcoin exchange rate) or "It can't fail!". Optimism beyond the point of sanity. Bitcoin can fail. It can fail if it turns out that nobody wants to accept it. Currency that nobody will trade anything for is just as useful as a small piece of paper, and in bitcoin's case, even less useful. And that's a very definite possible future of bitcoin.

People also go through all kinds of logical contortions to declare it a scam. But it doesn't fit the definition of a Ponzi scheme any more than any other currency does, nor does it fit the definition of a pyramid scheme at all. The closest thing it resembles is a hot tech stock. And nobody calls those scams unless they're accusing someone of 'pump and dump'. But 'pump and dump' doesn't fit the profile of most people who are interested in bitcoins and are trading them either.

And then people declare it valueless, as if any currency (even gold) has any intrinsic value beyond people's willingness to trade stuff for it.

Very few people talk about the worthiness of the cryptography. But even the ones who do paint either incredibly rosy pictures or ridiculous apocalyptic scenarios, neither of which really approach the truth of things.

I just find the way people ignore any reason and base their opinions on pure emotion to be kind of upsetting. And I notice this in a lot of arguments. But the arguments over bitcoin are almost comical in just how incredibly intense this phenomenon is. The only thing that makes it not comical is that you realize these people are deadly serious.

I think a lot of people have a lot of unexamined hang-ups about the meaning of money. It's deeply tied to their fundamental beliefs about politics, ethics, morality, and even self-worth. I think most people are terribly unequipped to tease these things apart and examine them separately. Money is 'magic'. People do not see it as the societal cooperation tool that it is.

I think, perhaps, that is one of the most valuable parts of the bitcoin project. Its nature provides a handle or a window for examining money as a societal and organizational tool. I suspect most people won't be able to take advantage of this, but I suspect many will, and our society will become richer for it.

They want to charge me $40/yr per domain for secondary DNS! $40/yr! This is completely ridiculous. With the volume of lookups I get, I could probably host all the domains on my own server on a DSL line if I wanted.

Is anybody out there willing to provide secondary DNS for a few domains for me? I'm willing to cough up the equivalent of $10/yr in bitcoins for the service if you really want.

I've been puzzling over a minimal and orthogonal set of properties for a session. I at first thought there were 3:

Message boundaries preserved
Whether or not your messages are delivered in discrete units, or whether they are delivered as a stream of bytes in which the original sizes of the send calls bear no relevance to how the bytes are chunked together on the other end.
Ordered
Whether or not data arrives in the order you sent it
Reliable
Well, this has a tricky definition. For TCP it means that failure to deliver is considered a failure of the underlying connection. But after such a failure you can't really be sure about exactly which bytes were delivered and which weren't.

But, as is evidenced by my description of 'reliable', these properties are not as hard-edged as they might seem. I also thought about latency; for example, a connection via email is relatively high-latency, and a connection between memory and the CPU is generally pretty low-latency. But I'm looking for hard-edged, yes/no type properties that are in some sense fundamental. Latency seems like a property that's rather fuzzy. It exists on a continuum, and isn't really a defining feature of a connection, something that would drastically alter how you wrote programs that used the connection. In an object model, it would be an object property, not something you'd make a different class for.

But I find TCP's notion of 'reliability' very curious. It isn't really, in any sense, particularly reliable. I've had ssh connections that died, but when I reconnect to my screen session, I discover that a whole bunch of the stuff I was typing made it through, it just wasn't echoed back.

It also interacts with 'ordered' in an odd way. It might make sense to have an unordered connection that was 'reliable', but what does that really mean then? If it's a TCP notion of reliability, you could just deliver the last message and have the connection drop. Also, what would it mean to have an unreliable, but ordered connection? Would that mean you could send a bunch of messages and have only the first and last ones delivered? And would it make any sense at all to have an unordered, unreliable connection in which message boundaries were not preserved?

So I've come up with a different division...

Message boundaries preserved
Whether or not your messages are delivered in discrete units, or whether they are delivered as a stream of bytes in which the original sizes of the send calls bear no relevance to how the bytes are chunked together on the other end.
Ordered
Whether or not data arrives in the order you sent it
Must not drop
This means that if a message does not make it through, the connection is considered to be in an unrecoverable error state, and no further messages may be sent. Though you may not know which message didn't make it through.
Delivery notification
Whether or not you can know that a message made it to the other side or not.

These are not fully orthogonal. For example, if message boundaries are not preserved then, for a connection to be at all sensible, it must also have the 'ordered' and 'must not drop' properties. Also, if messages must not be dropped, I'm not sure it would be sensible to have out-of-order delivery.

One of the rules of the system I'm designing is that any property that is not required may be provided anyway. This makes non-orthogonality much easier to deal with. So the prior cases aren't really a problem.
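For concreteness, here's one way the four properties and the 'may provide more than required' rule could be expressed (a sketch to illustrate the idea, not my actual design):

    from enum import Flag, auto

    class SessionProps(Flag):
        NONE = 0
        BOUNDARIES = auto()        # message boundaries preserved
        ORDERED = auto()           # data arrives in the order sent
        MUST_NOT_DROP = auto()     # a lost message kills the session
        DELIVERY_NOTIFY = auto()   # sender learns whether a message arrived

    def satisfies(provided, required):
        # A session is acceptable if it provides at least what's required.
        return (provided & required) == required

    tcp_like = SessionProps.ORDERED | SessionProps.MUST_NOT_DROP
    print(satisfies(tcp_like, SessionProps.ORDERED))     # True
    print(satisfies(tcp_like, SessionProps.BOUNDARIES))  # False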

Can any of you think of a better set of properties, or important properties that I left out?

Some good discussion also happens in this Google Buzz post that mirrors this entry.

Suicide is so common in Chinese iPad factories that Foxconn, the company that runs them, has taken to forcing prospective employees to sign no-suicide pacts.

Talk about treating the symptom instead of the disease.

A friend of mine has pointed out that this story is made to seem a lot worse than it really is. In particular, the suicide rate at Foxconn plants is much lower than at other similar facilities in China. He is also not much of a fan of Apple the company, so he doesn't have a fanboy bias. I'm not completely sure I agree with this way of looking at things, but here is what he wrote, so you can make up your own minds:

This story has been highly sensationalized. The reality is almost exactly the opposite of what you read.

  1. Eighteen Foxconn employees committed suicide in 2010 [1]... out of 920,000 workers [2]. That's a rate much lower than the Chinese average of 66 per million [3], which itself is like half of the American average of 111 per million [3].
  2. Apple is just one of many Foxconn clients. Others include Amazon (Kindle), Intel, Dell, Nintendo, Sony, Samsung, and many others [2]. Apple products are a small minority of Foxconn's output, yet the media calls them the "iPad factory". This is obviously intended to sensationalize the story -- scandal involving Apple is much more interesting than scandal involving Samsung.

I suspect that Foxconn came up with these no-suicide pledges in a desperate attempt to placate the media, and due to cultural differences they don't understand that to the American audience it only makes them look worse.

Building codes serve a few functions. The most important one is safety. But another is ensuring that your home does not fall to pieces in 10 years (after the builders are long gone) by forcing certain minimum standards of construction.

To the latter end, I think building codes for multi-unit dwellings should require that every unit have a fiber drop. I assume there are standards for phone hookups today (and possibly cable), and the fiber standard would have a very similar purpose and structure.

CAKE reached a new milestone early this morning. It now successfully both generates and parses messages that use the new protocol. It also successfully detected a re-used session id. I also think the code that does this is a lot better designed than the old code was. It's easier to see how to put it in the context of a larger system that implements a node that speaks the protocol.

It's also much more extensively tested at a deeper level with tests that are designed to document the inner workings of the system.

Overall, it's in a much better state than I left it when I mostly stopped working on it in 2004. And I'm going to handle the hard problems first: how to maintain the relationship between sessions and transports, and how to have two-way realtime conversations between nodes. This rather than concentrating on the messages that will be traded back and forth at a higher level (which will be done using protobuf). That can come later, especially since I'm not likely to get it right the first time anyway.

I also need to think about getting nodes to participate in a DHT to share assertions (like how to reach a particular node) in a distributed way.

Lastly, the protocol has something of a problem with 'liveness' because I designed it with the idea that conversations can be initiated without any round trips. There is some mitigation for this problem in session ids, but that mitigation is somewhat problematic because it requires the recipient of a conversation initiation to keep track of some state for everybody who tries to talk to it.

I'm not really sure how to handle the 'liveness' problem, though, and still preserve the no-round-trips property. I could require that session ids contain an 'hour number' or something similar, though that introduces a requirement for at least very coarse-grained time synchronization across all nodes.
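Sketching the 'hour number' idea in Python just to see its shape (hypothetical, not part of the actual protocol):

    import os, time

    HOUR = 3600

    def new_session_id():
        # Fold a coarse timestamp into the id alongside the random part.
        return (int(time.time()) // HOUR, os.urandom(16))

    def plausibly_live(session_id, skew_hours=1):
        # Reject stale ids without remembering every initiator, at the
        # cost of coarse clock synchronization between nodes.
        hour_number, _ = session_id
        now = int(time.time()) // HOUR
        return abs(now - hour_number) <= skew_hours

    sid = new_session_id()
    print(plausibly_live(sid))   # True: generated this hour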

Memory


Memory is stored in so many places. A seashell contains the memory of the organism that made it. Its trials and tribulations are recorded in the layers of material it deposited. Since it was unable to make meaningful decisions based on these memories, we hesitate to call them that, but our scientists eagerly read them, read the memories in whole strata of seashells, the memories of entire ecosystems.

We implicitly recognize this when we say something like "this house is full of memories". Every nick and change, unnoticeable by some, tells a tale of something that happened there. The patterns of wear on the floor, the neglected dusty corners tell tales as well.

Forensics is the art of reading memories from these structural changes. Reading memory from these things we hesitate to call memory because they are not immediately accessible to a living process. But memories they are.

We have a collective memory too. The most obvious and directly accessible is books. But we have memories in our cities, in our tools, in the structures both great and small. They are like mankind's seashells.

We think of ourselves as relatively self contained. We are divided from the world by the interface of our immediate perceptions. But that division is fuzzy and indistinct. We are much larger than our bodies. And much of our memory lives outside our heads.

I have been working on a serialization framework for Python that I'm happy with. I want to be able to describe CAKE protocol messages clearly and succinctly. This will make it easier to tweak the messages without having to rip apart difficult-to-understand code. It will also make the protocol easier to understand if I drop the project again and then come back to it years later, or if (by some miracle) someone else decides to help me with it.
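To give a flavor of what I mean by clear and succinct, something along these lines (a hypothetical sketch, not the framework's real API): a message is declared as named, typed fields, and the wire format falls out of the declaration.

    class Field:
        def __init__(self, kind):
            self.kind = kind                 # e.g. 'uint32', 'bytes'

    class Message:
        @classmethod
        def fields(cls):
            # Collect the declared fields, in declaration order.
            return {name: value for name, value in vars(cls).items()
                    if isinstance(value, Field)}

    class SessionOpen(Message):              # a hypothetical CAKE message
        session_id = Field('bytes')
        hour_number = Field('uint32')

    print(list(SessionOpen.fields()))        # ['session_id', 'hour_number']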


I have a problem for which protocol buffers seem like a good solution, but I'm reluctant to use them. First, protocol buffers include facilities for handling the addition of new fields in the future. This adds a small amount of overhead to a typical protocol buffer message, but it's a facility I do not need.

Also, I feel the variable sized number encoding is less efficient than it could be, though this is a very minor issue. I also feel like I have a number of special purpose data types that are not adequately represented.
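For reference, this is the varint scheme protocol buffers actually use: seven payload bits per byte, least significant group first, with the high bit of each byte marking continuation. Every encoded byte spends a bit on framing, so a full 64-bit value can take 10 bytes instead of 8.

    def encode_varint(n):
        # Protocol buffers' base-128 varint encoding for non-negative ints.
        out = bytearray()
        while True:
            byte = n & 0x7F
            n >>= 7
            if n:
                out.append(byte | 0x80)   # high bit set: more bytes follow
            else:
                out.append(byte)
                return bytes(out)

    print(encode_varint(300).hex())       # 'ac02' -- two bytes for 300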

I'm also not completely pleased with the C++ and/or Python APIs. I think they contain too many googlisms. I would like to see public APIs published that were free of adherence to Google coding standards like do-nothing constructors and no exceptions.

I think, maybe, I will be using protocol buffers for some messages that are sent by applications using CAKE as a transport/session layer. These include some of the sub-protocols that are required to be implemented by a conforming CAKE implementation.

On a different note, I think Google's C++ coding standards are lowering the overall quality of Open Source C++ code. This isn't a huge effect, but it's there.

It happens because Google's good name is associated with a set of published standards for C++ coding that include advice that while possibly good for Google internally is of dubious quality as general purpose advice. It also happens because when Google releases code for their internal tools to the Open Source community, these tools follow Google's standards. And some of these standards have the effect of making it hard to use code that doesn't comply with those standards in conjunction with code that does.

Normally XKCD is amusing for very positive reasons. But I frequently feel a lot like the guy with the beard in this cartoon, and it's really frustrating. So today's XKCD is darkly amusing to me. Freedom is such a hard sell before people lose it. People choose convenience every time, frequently until it's almost too late to fix the problem, all the while berating the people who were worried in the first place.
