Monday, January 28, 2008
Go Purdue!
It was a memorable Saturday evening for Purdue fans as our men's basketball team upset No. 11 Wisconsin 60-56. According to the records, it's been quite a while since Mackey Arena had an atmosphere like that, going back to when we beat then-No. 4 IU. Some students had lined up at the arena the previous night to get premium seats!
Sunday, January 27, 2008
8 Design Principles of Information Protection [Part 2]
As promised, here is the second part of "8 Design Principles of Information Protection" (part 1 is here). Last time I looked at the economy of mechanism, fail-safe default, complete mediation and open design principles. Today I'm walking you through the remaining four principles.
5. Separation of Privilege
We break up a single privilege among multiple components or people such that a collective agreement among those components or people is required to perform the task controlled by the privilege. Multi-factor authentication is a good example of this: in addition to some biometric authentication (what you are) such as a fingerprint or iris scan, the system may require you to present an ID card (what you have) to gain access. A good day-to-day example is a check that requires multiple signatures. As a side note, operating systems like OpenBSD implement privilege separation to strengthen the security of the system.
Secure multi-party computation in cryptography (first introduced by Andrew C. Yao in his 1982 paper on the millionaires' problem) is related to this concept in that the required computation cannot be performed without the participation of all the parties; the scheme has the added benefit that no participant learns anything about the secrets the other participants hold.
Secure secret sharing in cryptography (different from a shared secret such as a symmetric key) is another good example. Many secret sharing schemes have been proposed since Adi Shamir and George Blakley independently invented the concept in 1979. The basic idea is that you distribute portions of a secret to n people in a group, and you need the input of at least t (t <= n) of them in order to reconstruct the secret.
In programming terms, it's simply a logical && condition:
if (condition1 && condition2 && ... && conditionN) { /* perform the task */ }
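To make the t-of-n idea concrete, here is a minimal Python sketch of Shamir's scheme over a small prime field (the function names and the toy prime are my own; a real implementation would use a large prime and a cryptographically secure random source):

import random

PRIME = 2087  # toy prime field; far too small for real use

def split_secret(secret, t, n):
    # Random polynomial of degree t-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = split_secret(1234, t=3, n=5)
print(reconstruct(shares[:3]))  # any 3 of the 5 shares recover 1234

With fewer than t shares the secret stays hidden, which is exactly the collective-agreement property this principle asks for.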
6. Least Privilege
Some people confuse this with principle 5, but it is very different from the separation of privilege principle. It simply says that every program and every user should be granted only the minimum privileges they require to carry out their tasks. The rationale is that this limits the damage caused to the system in case of an error. For example, if a reader only needs to read certain files, giving it both read and write access to those files would violate this principle.
All the access control models out there (MLS, MAC, DAC, RBAC; for example, the Biba integrity model (1977), the Bell-LaPadula model (1973) and the Clark-Wilson integrity model (1987)) try to achieve this security principle.
I see some similarity between this principle and breaking a program into small pieces; it allows you to call only the small functions required to get the work done, without invoking unnecessary instructions.
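As a small illustration of requesting only what the task needs (the path is just an example, and the commented-out IDs are placeholders):

import os

# Request only the access the task requires: read-only, not read-write,
# so a bug later in the code cannot corrupt the file even by accident.
fd = os.open("/etc/hostname", os.O_RDONLY)
data = os.read(fd, 4096)
os.close(fd)

# A privileged daemon can go further and permanently drop its
# privileges once setup is done (POSIX-only sketch; order matters):
# os.setgid(unprivileged_gid)
# os.setuid(unprivileged_uid)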
7. Least Common Mechanism
The key idea behind this principle is to minimize shared resources such as files and variables. Every shared resource is a potential information path that could compromise security. Sharing resources is a key aspect of Web 2.0; while we could achieve perfect security through isolation, we would gain little benefit from the system's intended use. So we need to share information while preventing unintended information paths and leakages.
You can think of DoS attacks as a result of sharing your resources over the Internet/WAN/LAN with malicious parties. While we cannot isolate web resources, there are already many mechanisms, including proxies, to restrict such malicious uses and permit only the intended use.
In programming terms, you're better off using local variables as much as possible instead of global variables; the result is not only easier to maintain but also less exposed to unintended interference between callers.
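A minimal sketch of why shared state is an information path (all names here are my own):

def do_search(query):
    return "results for " + query  # stand-in for real work

_last_query = None  # shared global: every caller can observe it

def search_shared(user, query):
    global _last_query
    _last_query = query        # one user's query leaks to the next caller
    return do_search(query)

def search_isolated(user, query):
    # No shared state: the query lives only in this call's frame.
    return do_search(query)

The isolated version shares no mechanism between users, so there is no path for one caller's data to reach another.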
8. Psychological Acceptability
You cannot make a system secure at the cost of making it difficult to use! Ease of use is a key factor in whatever system we build. We see a lot of security mechanisms, from policy enforcement to access control, but few of them focus on usability. Further, users are more likely to make mistakes if the system is not intuitive to use.
If you are designing APIs, make sure you think about the user first!
Saturday, January 26, 2008
As We May Think
I recently read Vannevar Bush's speculative classic "As We May Think", published in 1945 as the bitter WWII finally ended. Bush always had the vision "to grow in the wisdom of race experience, rather than perishing in conflict", and the article beautifully puts that vision into words.
Having considered the plight of a researcher deluged with inaccessible information, he proposed the memex, a machine to rapidly access, and create arbitrary links between, pieces of information. The idea was to link books and films and to automatically follow cross-references from one work to another. Doesn't that sound like what Tim Berners-Lee invented in 1989? Although the way Bush viewed hyperlinks is quite different from what we have today, I am sure his article inspired the concept of the WWW and hyperlinks. He went on to talk about sharing information with colleagues and the wide publication of information; that is the whole point of Web 2.0. (His ideas on how to realize this vision may not look perfect given the technological advances we have had since he put them forward. However, we must admit that his thinking resembles many famous speculations in science and technology, from computation to communication.)
Here's the link to the complete article published in Atlantic Monthly.
A man/woman on Mars?
Not sure if this is true:
http://www.space.com/scienceastronomy/080124-bad-mystery-mars.html
Sunday, January 13, 2008
Expandable Grids for Visualizing and Authoring Security/Privacy Policies
Last week I attended a talk by Rob Reeder, a PhD student at CMU, on the $subject. The basic idea behind their (Lorrie Faith Cranor et al.) work is to come up with more intuitive, easier-to-use interfaces for representing policies for access control (e.g. OS file permissions), privacy preferences (e.g. P3P), etc. Although I am neither an HCI expert nor working in this area, I found the material very interesting; at the end of the day, the success or failure of whatever we invent is judged by its users. Simplicity is the key. As the French writer and aircraft designer Antoine de Saint-Exupéry once said, "A designer knows he has arrived at perfection not when there is no longer anything to add, but when there is no longer anything to take away."
Getting back to the problem they were trying to solve: existing policy representations are basically one-dimensional (take Windows file permissions and natural-language privacy policies, for example), and this may be a usability problem. Their solution is to represent the same information with two-dimensional expandable grids. The surveys they conducted showed mixed results.
Here are some of the examples he demonstrated:
An application for authoring Windows XP permissions.
Rows represent files (with expandable folders) and columns represent users (with expandable user groups). A cell represents the kind of permissions a user has on a file, a user has on a folder, a user group has on a file, or a user group has on a folder (four possibilities). Their surveys comparing this against the Windows XP permissions representation showed positive results.
Another example was representing P3P policies. Web sites usually specify P3P policies in natural language (English), which is quite difficult to follow. Here's a snapshot of their approach.
Their preliminary online surveys comparing the usability of this approach against the sequential natural-language approach have not produced promising results; their initial design did as badly as the natural-language approach or worse. Several reasons, including ones the authors themselves suggested, may have contributed to this:
1. A lot of information has been condensed into a small space.
2. Replacing natural-language descriptions with short terms may not be intuitive to users who are not that familiar with these types of policies (information is lost during the conversion).
3. A user needs to remember many kinds of symbols that are applied to policies but are not standardized notations; users simply don't want to go the extra mile of memorizing proprietary symbols just to protect their privacy. A standardized set of symbols might help.
4. The symbols used are quite similar to one another and do not reflect the operations they are supposed to represent.
5. The color scheme used is not appealing to me either.
(From a user's point of view, I hope the above information is useful for coming up with a better user interface for the grid, if someone in the group happens to read my blog!)
I also have some questions about scalability and other aspects:
How does their grid scale when there are many X and Y values?
What can be done to improve the grid when it is sparse?
Can the grid be loaded incrementally to improve performance?
How do we represent related policies?
How do we systematically convert one-dimensional representation into a grid without loss of information?
Although their initial work has had mixed success, I think that with improvements it can have a positive impact not only on OS permissions and policy authoring but also on other domains such as representing business rules, firewall rules, web services policies, etc.
Saturday, January 12, 2008
8 Design Principles of Information Protection [Part 1]
I recently re-read the seminal paper "The Protection of Information in Computer Systems" by J.H. Saltzer and M.D. Schroeder (Proceedings of the IEEE, 63(9):1278-1308, 1975). Although the paper is more than three decades old, it is interesting to see that the eight design principles it lays out are still valid for current secure system design and implementation. Before I give a brief overview of these principles, let me tell you why security is a hard thing to achieve: if you want your system to be secure, you need to prevent all unauthorized use, access, reading or writing of the information in the system. This negative requirement is hard to prove, as it is difficult, if not impossible, to enumerate all possible threats to a system. Given this fact, it is prudent to design and implement systems that incorporate lessons learned from the past, minimizing the number of security flaws. Today, I am looking at the first four principles, adding my own thoughts.
1. Economy of Mechanism - Keep the design as simple and small as possible. When it is small, we can have more confidence that it meets the security requirements, using various inspection techniques. Further, when your design is simple, the number of policies in your system stays small, and hence they are less complicated to manage (you will have only a few policy inconsistencies) and to use. The complexity of policies is a major drawback in current systems, including SELinux. Coming up with policy mechanisms that are simple yet cover all aspects is a current research topic.
2. Fail-safe Default - Base access decisions on permission rather than exclusion. With explicit permission, a design mistake denies access to a subject that should be allowed; this is a safe failure and can easily be spotted and corrected. With explicit exclusion, a design mistake grants access to a subject that should not be allowed; this unsafe failure may go unnoticed, compromising the security requirements. Putting it in pseudocode:
correct: if (explicitly permitted) { allow } else { deny } /* default is to deny */
incorrect: if (explicitly excluded) { deny } else { allow } /* default is to allow */
The bottom line is that this approach limits, if not eliminates, unauthorized use and access.
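Here is a minimal default-deny sketch in Python (the ACL structure and all names are mine):

# Explicit permissions; anything not listed is denied by default.
ACL = {
    ("alice", "report.txt"): {"read"},
    ("bob", "report.txt"): {"read", "write"},
}

def is_allowed(user, resource, action):
    # Fail-safe: a missing entry falls through to deny.
    return action in ACL.get((user, resource), set())

print(is_allowed("bob", "report.txt", "write"))    # True: explicitly granted
print(is_allowed("alice", "report.txt", "write"))  # False: never granted
print(is_allowed("eve", "report.txt", "read"))     # False: unknown user

A forgotten entry here surfaces as a denied request someone will complain about, not as a silent hole.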
3. Complete Mediation - Check permissions on every access a subject makes to an object. For example, if a subject tries to open a file, the system should check whether the subject has read permission on the file. For performance reasons, Linux does not completely follow this rule: when you read or write a file, permissions are checked the first time and not thereafter. SELinux enforces the complete mediation principle based on its policy configuration, which is one of its strengths. However, SELinux policy configuration is somewhat complex, moving us away from the first design principle, economy of mechanism.
In programming terms, we should validate the input arguments before doing any processing with them.
fun(args) { if (args not valid) { reject } else { process request } }
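To sketch the difference (the class and all names are my own invention): a check done only at open time caches the decision for the lifetime of the handle, whereas complete mediation re-checks on every operation, so a revoked permission takes effect on the very next access:

class MediatedFile:
    def __init__(self, user, path, acl):
        self.user, self.path, self.acl = user, path, acl

    def read(self):
        # Complete mediation: the permission check runs on each read,
        # not just once at open, so revocation is immediate.
        if "read" not in self.acl.get((self.user, self.path), set()):
            raise PermissionError(self.user + " may not read " + self.path)
        with open(self.path) as f:
            return f.read()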
4. Open Design - We cannot achieve security through obscurity! This is where the strength of open source shines. An open design allows widespread public review and scrutiny of the system, letting users evaluate the adequacy of its security mechanisms. While the mechanism is public, we depend on the secrecy of a few easily changeable parameters, such as passwords and keys. Take an encryption function: the algorithm is there for anyone to examine; the keys are the only secrets.
From a programming perspective, never hard-code any secrets into your code!
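A tiny illustration (the variable name API_KEY is my choice): keep the mechanism in the open and only the key secret, supplying it at run time rather than baking it into the source:

import os

# BAD: the secret ships with every copy of the (possibly public) code.
# API_KEY = "s3cr3t-key"

# BETTER: the mechanism stays public; the secret is a changeable
# parameter provided at run time (environment, key store, etc.).
API_KEY = os.environ["API_KEY"]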
I will write about the last 4 principles in my next blog entry.
5. Separation of Privilege
6. Least Privilege
7. Least Common Mechanism
8. Psychological Acceptability