(von Tiesenhausen's Law of Engineering Design) If you want to have a maximum effect on the design of a new engineering system, learn to draw. Engineers always wind up designing the vehicle to look like the initial artist's concept.
(Patton's Law of Program Planning) A good plan violently executed now is better than a perfect plan next week.
(de Saint-Exupéry's Law of Design) A designer knows that they have achieved perfection not when there is nothing left to add, but when there is nothing left to take away.
Any run-of-the-mill engineer can design something which is elegant. A good engineer designs systems to be efficient. A great engineer designs them to be effective.
Capabilities drive requirements, regardless of what the systems engineering textbooks say.
Any exploration program which "just happens" to include a new launch vehicle is, de facto, a launch vehicle program.
(alternate formulation) The three keys to keeping a new human space program affordable and on schedule:
1) No new launch vehicles.
2) No new launch vehicles.
3) Whatever you do, don't develop any new launch vehicles.
Any organization that designs a system (defined more broadly here than just information systems) will inevitably produce a design whose structure is a copy of the organization's communication structure.
To save you the trouble of wading through 45 paragraphs to find the thesis, I'll give an informal version of it to you now: Any organization that designs a system...
organizations which design systems (in the broad sense used here) are constrained to produce designs which are copies of the communication structures of these organizations.
I'd also like to point out that unlike every single horror I've ever
witnessed when looking closer at SCM products, git actually has a simple
design, with stable and reasonably well-documented data structures. In
fact, I'm a huge proponent of designing your code around the data, rather
than the other way around, and I think it's one of the reasons git has
been fairly successful (*).
So it's easy enough to just write whatever Java code or something to just
access the databases yourself. The object model of git may be smart, but
it's neither proprietary nor patented. I suspect it's often a lot easier
to integrate git into other projects _that_ way, rather than try to
actually port the code itself.
Linus
(*) I will, in fact, claim that the difference between a bad programmer
and a good one is whether he considers his code or his data structures
more important. Bad programmers worry about the code. Good programmers
worry about data structures and their relationships.
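Torvalds's point can be sketched in miniature. Below is a hypothetical content-addressed object store loosely modeled on git's object database; the names and structure are illustrative, not git's actual code. The key data structure, a map from content hash to bytes, is most of the design, and the code around it stays thin.

```python
import hashlib

# Hypothetical content-addressed store in the spirit of git's data model:
# the design lives in the data structure (hash -> bytes), not the code.
class ObjectStore:
    def __init__(self):
        self.objects = {}  # sha1 hex digest -> content bytes

    def put(self, content: bytes) -> str:
        key = hashlib.sha1(content).hexdigest()
        self.objects[key] = content
        return key

    def get(self, key: str) -> bytes:
        return self.objects[key]

store = ObjectStore()
key = store.put(b"hello world")
assert store.get(key) == b"hello world"
# Identical content hashes to the same key, so deduplication is a
# property of the data structure rather than of any clever code.
assert store.put(b"hello world") == key
```

Real git also prefixes a type-and-length header before hashing and persists objects to disk; this sketch keeps only the shape of the idea.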
The Fairbairn threshold is the point at which the effort of looking up or
keeping track of the definition is outweighed by the effort of rederiving
it or inlining it.
The term was in much more common use several years ago.
[...]
The primary use of the Fairbairn threshold is as a litmus test to avoid
giving names to trivial compositions, as there are a potentially explosive
number of them. In particular any method whose definition isn't much longer
than its name (e.g. fooBar = foo . bar) falls below the threshold.
The Fairbairn threshold is named after Jón Fairbairn, one of the original Haskell committee members.
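A quick illustration in Python (the helper name is invented for the example): a definition barely longer than its name falls below the threshold, because remembering what the name means costs more than rederiving the body inline.

```python
# Below the Fairbairn threshold: the definition is hardly longer than
# the name, so call sites gain little from the extra indirection.
def clean_name(s: str) -> str:
    return s.strip().lower()

# The inline form is just as clear at the point of use:
assert clean_name("  Ada ") == "  Ada ".strip().lower() == "ada"
```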
for each desired change, make the change easy (warning: this may be hard), then make the easy change
Show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won’t usually need your flowcharts; they’ll be obvious.
All problems in computer science can be solved by another level of indirection
"Fundamental theorem of software engineering" is a term originated by Andrew Koenig to describe a remark by Butler Lampson attributed to David J. Wheeler.
We can solve any problem by introducing an extra level of indirection.
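A minimal sketch of what an extra level of indirection buys (all names here are illustrative): callers name an operation instead of binding to a concrete function, so the binding can change without touching any call site.

```python
def add(a, b):
    return a + b

def mul(a, b):
    return a * b

# The dispatch table is the extra level of indirection.
dispatch = {"add": add, "mul": mul}

def apply(op_name, a, b):
    return dispatch[op_name](a, b)

assert apply("add", 2, 3) == 5
# Implementations can be swapped behind the indirection without
# changing any caller.
dispatch["mul"] = lambda a, b: b * a
assert apply("mul", 2, 3) == 6
```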
With a sufficient number of users of an API, it does not matter what you promise in the contract: all observable behaviors of your system will be depended on by somebody.
Taken to its logical extreme, this leads to the following observation, colloquially referred to as “The Law of Implicit Interfaces”: Given enough use, there is no such thing as a private implementation. That is, if an interface has enough consumers, they will collectively depend on every aspect of the implementation, intentionally or not. This effect serves to constrain changes to the implementation, which must now conform to both the explicitly documented interface, as well as the implicit interface captured by usage. We often refer to this phenomenon as "bug-for-bug compatibility."
At this point, the interface has evaporated: the implementation has become the interface, and any changes to it will violate consumer expectations. With a bit of luck, widespread, comprehensive, and automated testing can detect these new expectations but not ameliorate them.
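Hyrum's Law in miniature, with invented names: the documented contract below promises only which ids are returned, not their order, yet a caller quietly comes to depend on the sort the implementation happens to perform.

```python
def user_ids():
    # Contract: returns the user ids. The sorting is incidental and
    # promised nowhere.
    return sorted([3, 1, 2])

# This caller now depends on the undocumented ordering; the day
# user_ids() stops sorting, it breaks, even though the documented
# interface never changed.
first = user_ids()[0]
assert first == 1
```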
I think, in general, our software benefits when we use as few languages as possible, because programming languages have such powerful network effects. Embedded DSLs are usually strongly preferable to freestanding ones because we get to reuse so much knowledge and infrastructure.
Compositionality is the principle that a system should be designed by composing together smaller subsystems, and reasoning about the system should be done recursively on its structure.
Clearly interfaces are a crucial aspect of compositionality, and I suspect that interfaces are in fact synonymous with compositionality. That is, compositionality is not just the ability to compose objects, but the ability to work with an object after intentionally forgetting how it was built. The part that is remembered is the ‘interface’, which may be a type, or a contract, or some other high-level description. The crucial property of interfaces is that their complexity stays roughly constant as systems get larger.
In software, for example, an interface can be used without knowing whether it represents an atomic object, or a module containing millions of lines of code whose implementation is distributed over a large physical network.
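That "intentional forgetting" can be sketched with a structural interface in Python (the names are illustrative): the caller is written against the interface alone and cannot tell whether the implementation is a tiny dict or a distributed system.

```python
from typing import Protocol

class KeyValueStore(Protocol):
    def get(self, key: str) -> str: ...
    def put(self, key: str, value: str) -> None: ...

# One possible implementation; it could equally be millions of lines
# backed by a network service, and copy_key would not change.
class InMemoryStore:
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data[key]

    def put(self, key, value):
        self._data[key] = value

def copy_key(store: KeyValueStore, src: str, dst: str) -> None:
    store.put(dst, store.get(src))  # written against the interface only

s = InMemoryStore()
s.put("a", "42")
copy_key(s, "a", "b")
assert s.get("b") == "42"
```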
For examples of non-compositional systems, we look to nature. Generally speaking, the reductionist methodology of science has difficulty with biology, where an understanding of one scale often does not translate to an understanding on a larger scale.
For example, the behaviour of neurons is well-understood, but groups of neurons are not.
More generally, I claim that the opposite of compositionality is emergent effects. The common definition of emergence is a system being ‘more than the sum of its parts’, and so it is easy to see that such a system cannot be understood only in terms of its parts, i.e. it is not compositional. Moreover I claim that non-compositionality is a barrier to scientific understanding, because it breaks the reductionist methodology of always dividing a system into smaller components and translating explanations into lower levels.
Design proprietary software as if you intended to open source that software, regardless of whether you will open source that software
The project began because I wanted to…
...
… not always be forced into a POSIX-y filesystem model. That involves thinking of where to put stuff, and most of the time I don’t even want filenames. If I take a bunch of photos, those don’t have filenames (or not good ones, and not unique). They just exist. They don’t need a directory or a name. Likewise with blog posts, comments, likes, bookmarks, etc. They’re just objects.
It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures.
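Perlis's point shows up whenever records live in one generic structure: every function that already works on that structure works on your data for free. A small Python sketch with invented data:

```python
# Because these records are plain dicts in a list, the standard
# vocabulary (len, max, sorted, comprehensions) already applies;
# no per-type method set is needed.
users = [{"name": "ada", "age": 36}, {"name": "alan", "age": 41}]

names = [u["name"] for u in users]
oldest = max(users, key=lambda u: u["age"])
by_age = sorted(users, key=lambda u: u["age"])

assert names == ["ada", "alan"]
assert oldest["name"] == "alan"
assert by_age[0]["name"] == "ada"
```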
be conservative in what you send, be liberal in what you accept
2.10. Robustness Principle
TCP implementations should follow a general principle of robustness:
be conservative in what you do, be liberal in what you accept from
others.
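The principle sketched on a toy header parser (the format and function names are invented for illustration): parsing tolerates sloppy input, while output is always emitted in one canonical form.

```python
def parse_header(line: str) -> tuple[str, str]:
    # Liberal in what we accept: odd case and stray whitespace are fine.
    name, _, value = line.partition(":")
    return name.strip().lower(), value.strip()

def format_header(name: str, value: str) -> str:
    # Conservative in what we send: one canonical form only.
    return f"{name.strip().lower()}: {value.strip()}"

assert parse_header("Content-TYPE :  text/html ") == ("content-type", "text/html")
assert format_header("Content-TYPE", " text/html") == "content-type: text/html"
```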
One of the people who popularized the concept is Dan Olsen.
Olsen provides many examples of points in problem space: Taking notes in zero gravity, securely and quickly unlocking your phone, or filing your taxes, to cite a few. We could add anything else that can be formulated as a need: Preventing a meltdown in a nuclear power plant, finding the closest fast-food restaurant, or predicting next year’s fashion trend.
Any point in solution space corresponds to something you can build: a concrete product implementation. As engineering teams, we spend a lot of time finding points in solution space, meaning designing, architecting, and engineering software products.
this is a classic example of the 'encapsulation vs abstraction' thing that programmers are constantly getting wrong
If you're hiding information, you're encapsulating. The opposite of encapsulation is "openness."
If you're providing choice, you're abstracting. The opposite of abstract is "concrete."
A value like x = 3 + 2 is concrete, while x a = 3 + a is abstract, specifically abstract in a.
from a linguistics perspective, since a ton of people get it wrong, it's descriptively okay-ish to use abstraction to mean encapsulation. but from a precise terminology perspective, where the two words are pretty different in meaning and practice, well, we should get it right
(especially since Haskellers tend to be awful about encapsulation but very good about abstraction!)
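The Haskell examples above translate directly into Python (names invented for the sketch): abstraction introduces a parameter, encapsulation hides a detail, and the two are independent moves.

```python
x = 3 + 2            # concrete: a fixed value

def x_of(a):         # abstract, specifically abstract in a
    return 3 + a

assert x == 5
assert x_of(2) == 5  # choosing a = 2 recovers the concrete value

# Encapsulation is the orthogonal move: hiding information rather
# than providing choice.
class Counter:
    def __init__(self):
        self._n = 0  # hidden by convention; the opposite of openness

    def bump(self):
        self._n += 1

    def value(self):
        return self._n
```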
[E]ven Metcalfe's law understates the value created by a group-forming network [GFN] as it grows. Let's say you have a GFN with n members. If you add up all the potential two-person groups, three-person groups, and so on that those members could form, the number of possible groups equals 2^n. So the value of a GFN increases exponentially, in proportion to 2^n. I call that Reed's Law.
From David P. Reed's "The Law of the Pack" (Harvard Business Review, February 2001, pp. 23–24)
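Reed's count can be checked directly: the subsets of an n-member set (including the empty set and singletons, as in the quoted formulation) number exactly 2^n.

```python
from itertools import combinations

n = 5
members = range(n)

# Count every possible group by size; the total is 2**n.
groups = sum(1 for size in range(n + 1)
               for _ in combinations(members, size))

assert groups == 2 ** n == 32
```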
Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.
The ability to improve a design occurs primarily at the interfaces. This is also the prime location for screwing it up.
Heresy! The gospel (as advocated everywhere, including elsewhere in this book) is that domain and UI should be separate. In fact, it is difficult to apply any of the methods discussed later in this book without that separation, and so this SMART UI can be considered an “anti-pattern” in the context of domain-driven design. Yet it is a legitimate pattern in some other contexts. In truth, there are advantages to the SMART UI, and there are situations where it works best—which partially accounts for why it is so common.
"[...] But Admiral, all this emphasis on personnel and training is a terrific drain on us. You wouldn't believe how much time goes into it. It just isn't efficient use of all this high-powered technical talent you've recruited. Not to mention your own time".
"Efficiency isn't the objective, Dunford, effectiveness is. Don't confuse effectiveness with efficiency. I'm convinced that the only way to be effective, to make a difference in the real world, is to put ten times as much effort into everything as anyone else thinks is reasonable. It doesn't leave time for golf or cocktails, but it gets things done."
Uniform Metaphor: A language should be designed around a powerful metaphor that can be uniformly applied in all areas.
Examples of success in this area include LISP, which is built on the model of linked structures; APL, which is built on the model of arrays; and Smalltalk, which is built on the model of communicating objects. In each case, large applications are viewed in the same way as the fundamental units from which the system is built.
Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can.
-- From The Jargon File: www.catb.org/jargon/html/Z/Zawinskis-Law.html