As promised, here is the second part of "8 Design Principles of Information Protection" (part 1 is here). Last time I looked at the economy of mechanism, fail-safe defaults, complete mediation and open design principles. Today I'm walking you through the remaining four principles.
5. Separation of Privilege
We break a single privilege up among multiple components or people, so that performing the task controlled by that privilege requires their collective agreement. Multi-factor authentication is a good example: in addition to some biometric authentication (what you are) such as a fingerprint or iris scan, the system may require you to present an ID (what you have) to gain access. A good day-to-day example is a check that requires multiple signatures. As a side note, operating systems like OpenBSD implement privilege separation to step up the security of the system.
Secure multi-party computation in cryptography (first introduced by Andrew C. Yao in his 1982 paper on the millionaires' problem) is related to this concept: without the participation of all parties, the required computation cannot be performed. The scheme has the added benefit that no participant learns anything about the other participants' secrets beyond what the result itself reveals.
Secure secret sharing in cryptography (not to be confused with a shared secret such as a symmetric key) is another good example. There have been many secret sharing schemes since Adi Shamir and George Blakley independently invented the concept in 1979. The basic idea is that you distribute portions (shares) of a secret to n people in the group, and you need the input of at least t (t ≤ n) of them in order to reconstruct the secret.
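To make the t-of-n idea concrete, here is a minimal sketch of Shamir's scheme in Python: the secret is the constant term of a random degree t−1 polynomial over a prime field, each share is a point on that polynomial, and any t shares recover the secret by Lagrange interpolation at x = 0. (The prime and the function names are my own choices for the demo, and a real implementation would use a vetted library and a cryptographically secure RNG.)

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime; large enough for demo-sized secrets

def make_shares(secret, t, n):
    """Split `secret` into n shares such that any t of them reconstruct it."""
    # Random polynomial of degree t-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):   # Horner evaluation of the polynomial at x
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        # pow(den, PRIME - 2, PRIME) is den's modular inverse (Fermat's little theorem)
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = make_shares(12345, t=3, n=5)
print(reconstruct(shares[:3]))  # any 3 of the 5 shares recover 12345
```

Fewer than t shares reveal nothing about the secret, which is exactly the separation-of-privilege property: no subgroup below the threshold can act alone.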
In programming terms, it's simply a logical && (AND) over the individual conditions:
if (condition1 && condition2 && ... && conditionN) { /* perform the privileged task */ }
6. Least Privilege:
Some people confuse this with principle 5, but it is very different from separation of privilege. It simply says that every program and every user should be granted only the minimum privileges they require to carry out their tasks. The rationale is that this limits the damage to the system when something goes wrong. For example, if a reader only needs to read files, granting both read and write access to those files would violate this principle.
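In code, least privilege can be as simple as requesting a read-only file descriptor when you only need to read. A small sketch (the log-reader scenario is hypothetical): the OS itself then refuses any write attempt through that descriptor.

```python
import os
import tempfile

# Create a demo file standing in for a log that a reader process consumes.
fd, path = tempfile.mkstemp()
os.write(fd, b"entry 1\n")
os.close(fd)

ro = os.open(path, os.O_RDONLY)  # least privilege: ask only for read access
print(os.read(ro, 7))            # b'entry 1'
try:
    os.write(ro, b"tamper")      # the OS rejects writes on a read-only descriptor
except OSError:
    print("write denied")
os.close(ro)
os.remove(path)
```

If the reader is later compromised, the attacker holds a descriptor that cannot modify the file, which is exactly the damage-limiting effect the principle is after.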
All the access control models out there (MLS, MAC, DAC, RBAC; for example, the Biba integrity model from 1977, the Bell-LaPadula model from 1973 and the Clark-Wilson integrity model from 1987) try to achieve this security principle.
I see some similarity between this principle and breaking a program into small pieces: it allows you to call only the small functions required to get the work done, without invoking unnecessary instructions.
7. Least Common Mechanism
The key idea behind this is to minimize shared resources such as files and variables. Every shared resource is a potential information path that could compromise security. Sharing resources is a key aspect of Web 2.0; complete isolation would give us perfect security, but then the system would deliver little of its intended benefit. So we need to share information while preventing unintended information paths and leakages.
You can think of DoS attacks as a consequence of sharing your resources over the Internet/WAN/LAN with malicious parties. While we cannot isolate web resources, there are already many mechanisms, including proxies, that restrict such malicious use and allow only the intended use.
In programming terms, you're better off using local variables as much as possible instead of global variables; this is not only easier to maintain but also less likely to be exploited.
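A small sketch of the difference (the cache names are hypothetical): a module-level dictionary is a common mechanism every caller shares, so one caller's data leaks to all the others, while state passed in explicitly stays with its owner.

```python
shared_cache = {}  # a common mechanism: every caller reads and writes the same dict

def remember_shared(key, value):
    shared_cache[key] = value

def recall_shared(key):
    return shared_cache.get(key)

def recall_isolated(cache, key):
    # Each caller supplies its own cache, so callers cannot see each other's data.
    return cache.get(key)

# Alice stores a value in the shared cache; any other code path can read it.
remember_shared("token", "alice-secret")
print(recall_shared("token"))          # alice-secret

# With isolated state, Bob's lookup simply misses.
alice_cache = {"token": "alice-secret"}
bob_cache = {}
print(recall_isolated(bob_cache, "token"))  # None
```

The shared dictionary is exactly the kind of unintended information path this principle warns about; the explicit-parameter version minimizes the mechanism the two callers have in common.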
8. Psychological Acceptability
You cannot make a system secure at the cost of making it difficult to use! Ease of use is a key factor in any system we build. We see a lot of security mechanisms, from policy enforcement to access control, but few of them focus on the usability aspects. Further, users are more likely to make mistakes if the system is not intuitive to use.
If you are designing APIs, make sure you think about the user first!