Saturday, January 12, 2008

8 Design Principles of Information Protection [Part 1]

I recently reread the seminal paper, The Protection of Information in Computer Systems, Proceedings of the IEEE, 63(9):1278-1308, 1975, by J.H. Saltzer and M.D. Schroeder. Although this paper is more than three decades old, it is interesting to see that the eight design principles it describes are still valid for current secure system design and implementation. Before I give a brief overview of these principles, let me tell you why security is a hard thing to achieve: if you want your system to be secure, you need to prevent all unauthorized use, access, reading, or writing of the information in the system. This negative requirement is hard to prove, as it is difficult, if not impossible, to enumerate all possible threats to the system. Given this fact, it is prudent to design and implement systems by incorporating lessons learned from the past, minimizing the number of security flaws. Today, I am looking at the first four principles, adding my own thoughts.

1. Economy of Mechanism - Keep the design as simple and small as possible. When it is small, we can have more confidence that it meets security requirements, using various inspection techniques. Further, when your design is simple, the number of policies in your system stays small, which makes them less complicated to manage (you will have only a few policy inconsistencies) and to use. Policy complexity is a major drawback of current systems, including SELinux. Coming up with policy mechanisms that are simple yet cover all aspects is an active research topic.

2. Fail-safe Defaults - Base access decisions on permission rather than exclusion. A design mistake in a scheme based on explicit permission denies access to a subject that should be allowed. This is a safe failure and can easily be spotted and corrected. On the other hand, a design mistake in a scheme based on explicit exclusion allows access to a subject that should not be allowed. This is an unsafe failure and may go unnoticed, compromising security requirements. Putting it in pseudo code:

correct: if (explicitly_permitted) { allow } else { deny }
incorrect: if (explicitly_excluded) { deny } else { allow }

The bottom line is that this approach limits, if not eliminates, unauthorized use and access.
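A minimal sketch of a fail-safe (default-deny) access check, in Python. The permission table and the subject/object/action names are hypothetical, made up for illustration:

```python
# Only explicitly granted permissions are listed; everything else
# falls through to the default, which is deny.
ALLOWED = {("alice", "report.txt", "read")}

def is_allowed(subject, obj, action):
    # Access is granted only if an explicit permission entry exists.
    # An unlisted (subject, obj, action) triple is denied by default,
    # so a forgotten entry fails safe: it denies rather than allows.
    return (subject, obj, action) in ALLOWED

print(is_allowed("alice", "report.txt", "read"))   # True
print(is_allowed("bob", "report.txt", "write"))    # False
```

Note that if we forget to add a legitimate permission, a user complains and we add it; if the table were an exclusion list, forgetting an entry would silently grant access.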

3. Complete Mediation - Check permission for every object a subject wants to access. For example, if a subject tries to open a file, the system should check whether the subject has read permission on the file. For performance reasons, Linux does not completely follow this rule: when you read or write a file, permissions are checked the first time and not thereafter. SELinux enforces the complete mediation principle based on policy configurations, which is one of its strengths. However, the policy configuration in SELinux is somewhat complex, moving us away from the first design principle, Economy of Mechanism.

In programming terms, we should validate the input arguments before doing any processing with them.

fun(args) { if (args not valid) { reject } else { process request } }
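The validate-before-process pattern above can be sketched in Python. The handler and its field names ("user", "path") are hypothetical, chosen only to make the sketch concrete:

```python
def handle_request(args):
    # Mediate every request: reject invalid input up front,
    # before any processing touches it.
    if not isinstance(args, dict) or "user" not in args or "path" not in args:
        raise ValueError("invalid request")
    # Processing happens only after validation passes.
    return "serving %s to %s" % (args["path"], args["user"])
```

The key point is that there is no code path that reaches the processing step without first passing the check.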

4. Open Design - We cannot achieve security by obscurity! This is where the strength of open source shines. An open design allows widespread public review and scrutiny of the system, letting users evaluate the adequacy of the security mechanism. While the mechanism is public, we depend on the secrecy of a few easily changeable parameters, such as passwords and keys. Take an encryption function: the algorithm is there for anyone to inspect; the keys are the only secrets.

From a programming perspective, never hard-code any secrets into your code!
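One common way to keep secrets out of the source is to load them from the environment at run time. A minimal sketch; the variable name APP_SECRET_KEY is an assumption, not a real convention:

```python
import os

def load_secret():
    # The mechanism (this function) is open; only the key is secret.
    key = os.environ.get("APP_SECRET_KEY")  # APP_SECRET_KEY is a made-up name
    if key is None:
        # Fail safe: refuse to run with a missing secret rather than
        # falling back to a hard-coded default.
        raise RuntimeError("APP_SECRET_KEY is not set")
    return key
```

Anyone can read this code without learning the secret, which is exactly the property the open design principle asks for.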

I will write about the last four principles in my next blog entry:
5. Separation of Privileges
6. Least Privilege
7. Least Common Mechanism
8. Psychological Acceptability
