Thursday, December 27, 2007

A good resource on statistical data mining

Lately I have gotten interested in statistical data mining. I think there's huge potential in this area, which combines mathematics and algorithms. We have exabytes of information in various digital forms, and the amount roughly doubles every 3 years. This demands new models and algorithms to efficiently and accurately extract useful, relevant information from this vast repository. This is where data mining techniques come into play. I found the set of lecture notes by Andrew Moore very useful for getting the basic concepts right. I like the simplicity with which he unfolds each lesson.

Wednesday, December 26, 2007

McAfee Annual Criminology Report is Out

Cybercrime is on the rise, and no one, including you and me, is safe from these mounting threats. McAfee provides an in-depth analysis of the $subject in their annual report.

Saturday, December 1, 2007

Overcoming the social stereotypes attached to the disabled

While I was browsing through local newspapers, this article caught my attention. It's about people who were not fortunate enough to be gifted with a normal life like me, or possibly you, yet who strive to create a better world not only for themselves but also for others suffering from poverty or disabilities.

It was about Sri Kanth, born with dwarfism (a condition that hinders normal mobility and dexterity), who has overcome all odds to be successful in the IT industry.

Some links:
- Sri Kanth and his siblings (who were also born with the same condition) started the Koslanda nanasala
- The group they created for hill-country differently-abled people (the heading of the web site caught my eye: "Some need to succeed in order to belong... some need to belong in order to succeed")
- Sri Kanth initiated and leads the Uva Province Telecentre Family
- Sri Kanth's latest venture

It's time we realize the disabled are differently-abled people!

(Locations and some words used here are related to Sri Lanka - my sweet home)

Thursday, November 29, 2007

Beware of booby-trapped sites

This being the festive season, hackers try to draw you to malicious sites by fooling the search indexing mechanisms of popular search engines. The BBC has a recent report about this. It's a good idea to apply all available security patches to whatever browser you use to thwart these attacks!

Sunday, October 7, 2007

Chance Rules in Everyday Life

When you come to think about it, probability plays a larger role in everyday life than most of us realize. Winning a lottery, birthday attacks, filtering spam, identifying intrusions, authenticating using biometrics, and coincidences are some of the immediate examples that come into my numerically-tuned ;) mind. Sometimes our intuition makes a guess about something happening, but we often find that the mathematical implications are quite different from what we thought. Wikipedia has a nice example on false positives which provides evidence for this claim.
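The false-positive example can be checked with a few lines of Bayes' rule. The numbers below (a 1-in-1000 base rate, a perfectly sensitive test, a 5% false-positive rate) are my own illustrative assumptions, not the Wikipedia article's:

```python
# Bayes' rule for the false-positive paradox.
# Assumed numbers: 0.1% of people have the condition; the test catches
# every true case (sensitivity 1.0) but also flags 5% of healthy people.
base_rate = 0.001
sensitivity = 1.0
false_positive_rate = 0.05

# Total probability of testing positive (sick or healthy).
p_positive = sensitivity * base_rate + false_positive_rate * (1 - base_rate)

# Probability you are actually sick, given a positive test.
p_sick_given_positive = sensitivity * base_rate / p_positive

print(round(p_sick_given_positive, 3))  # prints 0.02
```

Intuition says a positive result from an accurate test means you are probably sick; the math says the chance is only about 2%, because healthy people vastly outnumber sick ones.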

I found the book Understanding Probability: Chance Rules in Everyday Life by Henk C. Tijms really interesting for two reasons. First, I like the catchy title. Second is a more serious one: throughout the book, the author beautifully shows how probability is an integral part of our lives.

For those who want to stretch their minds, here's the famous "Monty Hall Problem", if you have not come across it already.

Suppose you’re on a game show, and you’re given the choice of three doors:

Behind one door is a car; there's nothing behind the other two doors. You pick a door, say number 3, and the host, who knows what's behind the doors, opens another door, say number 2, which is empty. Then you are given the opportunity to select the remaining door (number 1 in this case) or stick with your first choice.

Does it matter if you change your mind and pick door number 1? Surprisingly, the answer is yes. Intuitively you, like me, may believe you have a probability of 0.5 (a 1 in 2 chance) of winning, but it turns out that this is not the case.

What is the probability of winning the car if you stick with your first choice? 1/3. (This should be simple enough to see, as your first pick is one choice out of three.)

What is the probability of winning the car if you change your mind and choose door number 1? 2/3. (You only have two choices: stick with your first pick, 1/3, or switch, 1 − 1/3 = 2/3.)

You see, second thoughts are not a bad thing when conditions change during the course!
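A quick Monte Carlo simulation (a sketch of my own, not from the book) bears these numbers out. By symmetry the player can always pick door 0; the host then opens an empty door that is neither the pick nor the car:

```python
import random

def play(switch, trials=100_000):
    """Simulate the Monty Hall game and return the win rate."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)   # car hidden behind a random door
        pick = 0                    # fixed first choice (symmetry)
        # Host opens an empty door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

random.seed(1)
print(play(switch=False))  # close to 1/3
print(play(switch=True))   # close to 2/3
```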

Friday, September 28, 2007

Adoption of Biometrics

I've just checked in Google Trends how much people have searched for biometrics compared to cryptography in the United States over the last few years.

If we assume that the search volume implies the interest of adoption, the center of gravity ;) is moving towards biometrics! So, for everyone out there doing biometrics research, this is a good sign.

Biometrics vs. Cryptography

Today I attended a talk by Cathy Tilton, VP of Standards and Technology at Daon, on the $subject. It was pretty interesting.

The current security model for the verification of identity is based on using a token, tied to and thereby representing an individual. This token may be
- a password or shared secret (something you know)
- an identity card (something you have) or
- a biometric (something you are)

There is a growing interest in the use of biometrics which is evident from identification requirements at airports, embassies, new access control systems etc.

People in the field of cryptography (like me!) tend to compare the privacy and security issues of biometrics against cryptographic standards (a mind-set, if you will). However, this comparison may not hold when you look at the way each is used.

In cryptographic authentication, users are provided with credentials by the service provider. These credentials should be kept secret at all times. The objective is to verify possession of the credentials when accessing the service. On the other hand, biometric authentication does not rely on the secrecy of the biometrics used (fingerprints, face, iris, etc.). Rather, it relies on the integrity and authenticity of the data source. That is not to say the privacy of biometric information is unimportant - a compromised biometric database could be very harmful to public safety and national security. Further, biometrics alone have another potential drawback for authentication: how do we revoke a biometric template (the digital form of a biometric) when it is compromised?

The way forward is to combine biometrics with cryptography. One of the inherent issues with biometrics (currently a hot research topic) is confidently associating a template with a user, since different templates captured from the same user hardly ever match exactly. Identification is always a probabilistic measure. I have seen some papers where people try to come up with techniques to associate a single stable value with a user's biometric measurements.
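The contrast can be sketched in a few lines: cryptographic verification is an exact comparison against a stored digest, while biometric verification accepts a noisy template within some distance threshold. The SHA-256 digest and the Hamming-distance threshold below are my own illustrative choices, not any particular deployed scheme:

```python
import hashlib

def crypto_verify(password, stored_digest):
    # Exact match: flipping a single bit of the input fails verification.
    return hashlib.sha256(password.encode()).hexdigest() == stored_digest

def biometric_verify(template, enrolled, threshold=3):
    # Threshold match: two captures of the same trait never agree exactly,
    # so accept any template within a small Hamming distance.
    distance = sum(a != b for a, b in zip(template, enrolled))
    return distance <= threshold

stored = hashlib.sha256(b"s3cret").hexdigest()
print(crypto_verify("s3cret", stored))               # True
print(crypto_verify("s3creT", stored))               # False
print(biometric_verify("1011001110", "1011011100"))  # True: 2 bits differ
```

Note the revocation asymmetry the talk pointed out: a compromised digest can be re-issued by changing the password, but you cannot change your fingerprint.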

Tuesday, June 12, 2007

Free services aren't really free

Last year I switched from Yahoo mail to Gmail because of its superior features. However, Google recently let me down for a couple of days by denying access to my Gmail account due to unavailability of the service. Since it's free, we have little way of getting any sort of customer service. It goes well with the line "you get what you pay for". And I cannot complain much, as it is still in beta. Once I got access to my account again, the first thing I did was read the terms of use. Among other things, it is written in black and white: "the Service is provided on an AS IS and AS AVAILABLE basis. Google disclaims all responsibility and liability for the availability, timeliness, security or reliability of the Service."

This is the situation with pretty much every free service, irrespective of who provides it. What we need to realize is that free services, be they email, web hosting or something else, are not the solution to all the problems and needs we have. This is particularly true in a business environment. Before you go ahead with these free services, do take some time to assess the cost of using them:

How important are those free services you rely on?
How does the unavailability of services affect you and your customers?
How does the cost of unavailability compare with the cost of a paid service?
What alternative sources are available to augment free services you are using?
How does the response time affect you?

It's always healthy not to rely on free services for your important and primary needs. I am not completely ruling out the use of services like free mail - they are great for certain things, but not for everything. In fact, I will continue to keep all developer/user mailing lists I have subscribed to under Gmail.

I hope someone will come up with a really cool mail service which leverages existing free mail services like Gmail, Yahoo Mail and Hotmail. A naive example: we get one email address, and each actual email is replicated across the three major mail provider accounts, which would dramatically reduce the cost of relying on a single free email provider.
(Gmail does allow forwarding a copy to another account while keeping a copy, but I am thinking of something that will work even in the face of unavailability, and I am not considering the possibility of manually setting up a forwarding list through a paid service)
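The replication idea could behave something like the sketch below. The provider names and `send` callables are hypothetical stand-ins for real mail APIs; the point is only that delivery survives as long as at least one provider is up:

```python
# Toy sketch of mail replication across independent providers.
# A message is delivered to every reachable provider; the service
# only fails if all of them are down at once.

def replicate(message, providers):
    delivered = []
    for name, send in providers:
        try:
            send(message)
            delivered.append(name)
        except OSError:
            pass  # this provider is unavailable; other copies still exist
    if not delivered:
        raise RuntimeError("all providers are down")
    return delivered

inbox = []
def up(msg): inbox.append(msg)                     # provider accepting mail
def down(msg): raise OSError("service unavailable")  # provider outage

copies = replicate("hello", [("gmail", up), ("yahoo", down), ("hotmail", up)])
print(copies)  # ['gmail', 'hotmail']
```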

Sunday, June 10, 2007

Wireless Power

While I was reading the WSO2 blog feed, I stumbled upon this post. I thought of investigating further into the current status of, and the physics behind, the $subject.

Physicists at MIT led by Marin Soljacic have successfully demonstrated how to wirelessly illuminate an unplugged light bulb from seven feet away [1, 2]. Their work is based on magnetically coupled resonance. Just as in acoustics, two objects with the same resonant frequency interact strongly with one another, while interacting weakly with off-resonant objects. Their first paper, "Efficient wireless non-radiative mid-range energy transfer", describes the theory behind the method. The abstract goes as follows:

We investigate whether, and to what extent, the physical phenomenon of long-lifetime resonant electromagnetic states with localized slowly-evanescent field patterns can be used to transfer energy efficiently over non-negligible distances, even in the presence of extraneous environmental objects. Via detailed theoretical and numerical analyses of typical real-world model-situations and realistic material parameters, we establish that such a non-radiative scheme can lead to “strong coupling” between two medium-range distant such states and thus could indeed be practical for efficient medium-range wireless energy transfer.

Researchers have been experimenting with the $subject for over a century. Electromagnetic radiation is one such method, but it has its own limitations. For example, if we use radio waves, we may end up wasting a lot of energy as it spreads in all directions. We may overcome this limitation by using lasers, but the problem with lasers is that you cannot have obstacles between the energy source and the device you want to power.
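Coming back to the resonance idea: for intuition, each coil behaves like an LC resonator with natural frequency f = 1/(2π√(LC)), and two coils tuned to the same f couple strongly. The component values below are made-up round numbers, not those of the MIT setup:

```python
import math

def resonant_frequency(inductance_h, capacitance_f):
    # Natural frequency of an LC resonator: f = 1 / (2*pi*sqrt(L*C)).
    return 1.0 / (2 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Two coils tuned to the same (made-up) L and C share a resonant
# frequency and hence exchange energy efficiently.
f = resonant_frequency(25e-6, 1e-9)  # 25 microhenries, 1 nanofarad
print(f"{f / 1e6:.2f} MHz")
```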

This work is different from the wireless charging technologies based on radio, induction or resonance. Apple, Motorola, and many other companies are already into this. Companies like SplashPower, WildCharge, etc. have already come up with the technology that can charge multiple devices at once by simply placing the gadgets on a mousepad-like surface. In the future, you'll just dump your devices on a pad and in no time they get charged!

Tribute to the champion behind the ICT industry in Sri Lanka

It was sad to hear of the demise of the most valued person in the ICT industry in Sri Lanka, Prof. V.K. Samaranayake, last week [1, 2]. He was instrumental in taking IT to the rural areas of the country. Among the numerous initiatives in his four-decade-long commitment to the IT industry in Sri Lanka, we will never forget his immense contributions to computerizing national elections, taking IT to rural villages through programs like nanasala, standardizing the Sinhala character set, enhancing Internet connectivity in Sri Lankan universities, and introducing the BIT (Bachelor of Information Technology of Colombo University) external degree program. The many awards he received during this period are good evidence of his contributions.

Before the introduction of the BIT, only a few students from high schools got the opportunity to study Computer Science/Engineering/IT in state universities. More than 99% of the student population was denied entry into this field due to limited resources. I, having been a lecturer for this external degree program in 2005 and 2006, have met many talented students because of this initiative.

You can leave your e-condolence message here.

Thursday, June 7, 2007

Access to Information to Everyone

I had the opportunity to listen to the talk “Technology for the Global Good” by Cliff Missen, the founder of WiderNet project, at the Fulbright Science and Technology seminar held in San Jose, CA.


Schools and universities in most developing countries have limited or no Internet connection. This is especially true in African and South American developing countries. In Sri Lanka, an Asian country, most universities provide free Internet access to their students. However, the bandwidth is much lower than in developed countries. Further, most schools don't have access to the vast amount of information available on the Internet.


I am impressed by the eGranary project, which aims to bring information to African students free of charge. Since these countries do not possess enough bandwidth to connect directly to the Internet, the basic idea is to create an offline version of educational web sites and store it in persistent storage. You may think this is not useful, but the point is that something is better than nothing. If these countries don't have the infrastructure to deliver high-bandwidth content, the next best option is to replicate the information and make it available through other means. How often do we think of the perfect solution and give up when we cannot overcome the obstacles? This is a good lesson for all such thinkers. It is not the perfect solution that matters, but the impact you make by implementing a feasible solution.

They use the open source HTTrack software to replicate educational sites from the Internet. One obvious question: with information on the web changing constantly, how do we keep the offline copy up to date? The key idea is to use a protocol that updates the offline copy based on the estimated update frequency of the actual web sites. We need a similar protocol to work the other way round, that is, reflecting changes made to the offline copy back in the actual web sites. You can contribute to this project by allowing eGranary to use web content you own. Currently the update process is manual, but I think fully automating the update process both ways is not a far-away goal.
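A refresh protocol of that kind might look something like this sketch: each mirrored page carries a rough estimate of how often the live site changes, and the crawler refetches only pages whose offline copy is older than that interval. The field names and numbers are my own illustration, not eGranary's actual design:

```python
# Each mirrored page records when it was last fetched and a rough
# estimate of how often the live site changes (both in seconds).
pages = {
    "site-a/index.html": {"fetched_at": 0,     "change_interval": 86400},
    "site-b/notes.html": {"fetched_at": 90000, "change_interval": 604800},
}

def pages_due_for_refresh(pages, now):
    # Refetch a page once its offline copy is older than the site's
    # estimated update interval.
    return sorted(url for url, p in pages.items()
                  if now - p["fetched_at"] >= p["change_interval"])

print(pages_due_for_refresh(pages, now=100000))  # ['site-a/index.html']
```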

It'd be quite useless to have gigabytes of information offline without a search facility. I am really impressed by the search engine provided with the eGranary digital library - it's simply like a Google search result.

IMHO, this is a really good way to bring information to high school students in countries like ours. One further step would be to tailor different offline copies to different audiences.

There are other benefits of using the above idea as well.

As we all know, the Internet is full of information, most of which is useless for any particular audience. By filtering out only the useful information, we not only make information search quick, but also build a network of trusted web sites.

Parents are increasingly concerned about protecting their children from harmful content on the web while still allowing them to access information. This'd be a good way to meet both objectives.

Wednesday, May 9, 2007

Apache Axis2/C 1.0 is released!

After about one and a half years of hard work by many committers, Apache Axis2/C 1.0 was released a few days ago. I downloaded the windows version and it works like a charm. Congratulations to the Axis2/C team!

Tuesday, April 17, 2007

Midwest Security Workshop

The third Midwest Security Workshop is to be held on the 21st of this month. If you are somewhere close to Purdue and interested in information security, come join us!

Monday, April 16, 2007

Talk: A Quant Looks at the Future

Dan Geer (Verdasys) was the closing speaker of the 8th CERIAS Security Symposium. I really enjoyed his talk and the loud applause at the end confirmed that the majority did as well. It was truly amazing how he had synthesized his thoughts through a series of graphs. He's a real quant(itative guy) !

The crux of his talk was that we need to protect the data, not the infrastructure over which the data is transmitted. Those who are in possession of data will rule the world!

Based on NCMS data, he talked beautifully about the connection between the degree of collaboration and anticipation/mitigation costs.

And another fact...phishing is a profession!

Sunday, April 15, 2007

Ugly side of blogging

Creating Passionate Users is one of the blogs I enjoy reading. Kathy (aka the mastermind behind the Head First series) is the primary blogger. Now she has temporarily stopped blogging after receiving various threats. I hope she'll not give in to those few people who cannot bear her success.

Randomly I selected two posts I liked:

The first one..
How to become an expert.. As she puts it so eloquently, "The only thing standing between you-as-amateur and you-as-expert is dedication". The following diagram shows it all.

(Courtesy: Creating Passionate Users)

The second one
Code like a girl... I am talking about true beauty here, and it cannot be precisely defined, as the famous line goes: "beauty lies in the eyes of the beholder". As a person who likes numbers a lot, I sometimes come across beautiful mathematical equations. I like Kathy's idea of associating coding with beauty. The code we write should not only act right but look right ;-)

Wednesday, April 11, 2007

Talk: Dumb Ideas in Computer Security

(The last) Talk #3 for the day: If I rated this talk on a 1 to 10 scale with 10 being the best, I'd give all ten points without holding anything back! It goes with the line "the best comes last" (actually I am rather short of words to express it; maybe there's a related line. Anyway, this is how we say it in our mother tongue: "hoda hoda sellum eliwena jameta" :-)

The presenter was Dr. Charles P. Pfleeger, a consultant, speaker, educator and author on computer and information system security. The talk was part of the CERIAS seminar series.

He talked about 6 dumb ideas in computer security. These are additions to what Marcus J. Ranum had to say about the $subject. Here is a summary of what I remember from his talk.

#1 We'll do security later.
The idea is that you cannot retrofit security. You have to think about security up front.

#2 We'll do privacy later.
The idea is that we should have fair information practices. You have to tell users what you are going to do with their data.

#3 Encryption cures all.
Encryption is over-rated. You need to think about the periods where the data is in the plain. For example, even if you have end-to-end encryption, so the data is never in the clear in transit, you still have data in the clear at the start and at the end of the transaction. Other issues, which are less severe than the one mentioned above, are key management and implementation/algorithm weaknesses.

#4 You have either perfect security or nothing
Putting it in his own words, providing security is not like walking a tightrope, where you are either on the rope or not. Security is a continuum. It's not practical, and often unnecessary, to seek to provide more security than required. You have to quantify the risk you face and decide how much risk you are willing to take. In other words, we need to keep the security requirements sensible.

#5 Separation is Unnecessary
The idea is that controlled sharing requires separation. Just as you have to draw a line between spectators and players, you have to think about different levels of security. This idea is not new; in fact, it was introduced in operating systems way back in the 1970s. We went back to old habits in the 1980s, and now we are returning to the good old principles.

#6 It's easy - we can do security ourselves
This idea is not very clear to me yet; I have to read some of the references he cited to get a real understanding of it. However, the idea is that program complexity inhibits security: with complex code, enforcing security is more difficult than you may think.

Some post-talk thoughts:
Most of the papers I have seen in the security area try to come up with security solutions (complete security solutions, if you will) that do not consider some of the misconceptions mentioned above (for example, risk vs. gain analysis).
Another thing I ponder is whether we, in the security field, actually put enough weight into understanding who is going to use our solutions, how differently they interpret the solutions we provide, etc.

That's it from me about the talk!

I've just looked at the score of the World Cup Super 8 cricket match between England and Bangladesh. England has managed to limp past Bangladesh. Had the Tigers (the Bangladesh team) put a few more runs on the board, they could have caused their third (first, second) famous upset in the World Cup. Tomorrow we, the Lions (the Sri Lankan team), take on the black hats (the New Zealand team). I think our team is in good shape to seal the victory.

Now it's time to get back to other fun stuff. I need to put some finishing touches on the operating systems paging lab which is due tomorrow at 11.59.59 pm ;-) and get onto the fast track of preparing for the two quals I am taking in two weeks' time. I also need to prepare a report for the independent study I am doing. I will have to wait to reveal more about that work until the paper gets accepted. So stay tuned!

There are a bunch of papers related to the $subject. Maybe I'll put out a list with my interpretations when I get some time.

Talk: Browser Security: A New Research Territory

Talk #2 for the day: by Dr. Shuo Chen, Microsoft Research. His talk was centered around two published papers (technical reports, if you will):

1. A Systematic Approach to Uncover Security Flaws in GUI Logic
2. Light-Weight Transparent Defense Against Browser Cross-Frame Attacks Using Script Accenting

As the $subject and $titles imply, their goal is to improve security in web browsers: though he presented the concepts citing examples from IE, most of the issues prevail in other browsers as well.

One thing that fascinated me is the systematic approach they took to reason about security. In more specific terms, they first clearly defined the system invariants and then made sure the invariants are maintained throughout. (This is what we call formal verification.)

He beautifully explained how (smart) hackers have exploited logic bugs in browser interfaces to launch phishing attacks (some of them very subtle) and went on to talk about how to systematically uncover and fix them.

The next area he talked about is how to deal with browser cross-domain attacks. He showed how hackers have exploited, among other things, race conditions to launch such attacks, and provided some insight into the script accenting technique he has developed to counter them.

The talk was well worth the time!

One last thought about it... If hackers can exploit browser inconsistencies (maybe bugs) with black-box techniques (they know the general techniques that all browsers use), I wonder how many more attacks we would have seen if proprietary browser code were made public. Doesn't it violate one of the pillars of security: "the security of a mechanism should not depend on the secrecy of its implementation"?

Talk: Managing Uncertainty Using Probabilistic Databases

With the frequency of talks I am attending (the Nyquist-Shannon sampling theorem should give you an idea of how fast I need to be to keep up with the information flow :-) and a plate full of other work, I usually don't get much time to randomize ;-) those talks. Enough beating around the bush... let's get started with the $subject.

Talk #1 for the day: by Nilesh Dalvi (PhD candidate, University of Washington), who is being considered for a faculty position at Purdue. It's that time of the year at Purdue when many faculty candidate talks are organized, and I usually don't miss any talk that touches my areas of interest. (One incentive to go is the free food... I am just kidding :-). Getting even a slot to be considered as a faculty candidate at Purdue is quite competitive; these guys come really, really prepared with their stuff.) Although databases/data mining is not my main area of research, I do find it interesting. His talk was centered around a data mining approach to measuring uncertainty more or less objectively in information retrieval from probabilistic databases. Well, he presented what he ate, drank, and slept on for the last 5 years. You can get direct access to most of the papers he authored/co-authored from his home page. His talk rekindled my liking for the probability and statistics techniques I learnt back in high school (well, we call it the GCE Advanced Level) and later in undergrad studies at the University of Moratuwa. OTOMH, some of the highlights of his talk:

- how do we rank query results from a probabilistic database?
- how do we efficiently evaluate queries on probabilistic databases? (they have an implementation that adds some more syntax to existing SQL)
- how do we reason about privacy in data sharing when many views (snapshots, if you will) of a database have been published, irrespective of how large our sample is?
- how do we come up with optimization techniques for queries that are NP-complete?
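As a toy version of the ranking question, take tuples annotated with independent existence probabilities; an answer supported by several such tuples exists with probability 1 − ∏(1 − pᵢ), and answers are ranked by that score. This is my own simplification, not the model from his papers:

```python
from math import prod

# Toy probabilistic database: each supporting tuple for an answer
# exists independently with the given probability.
supporting = {
    "Alice": [0.9],
    "Bob":   [0.5, 0.5],   # two independent supporting tuples
    "Carol": [0.3, 0.2],
}

def answer_probability(ps):
    # Probability that at least one supporting tuple exists.
    return 1 - prod(1 - p for p in ps)

# Rank answers by their probability of being in the query result.
ranked = sorted(supporting,
                key=lambda a: answer_probability(supporting[a]),
                reverse=True)
print(ranked)  # ['Alice', 'Bob', 'Carol']
```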

Seminar Bingo!

I've been attending quite a few seminars (CS faculty candidate talks, the CERIAS security seminar series, industry talks, etc.). I am wondering whether I should take this card with me ;-) to rate the talks. For more cards, click here or here ;-)

Friday, March 23, 2007

Top 10 Reasons to Major in Computing

I am marketing my own field of study ;) Want to know what the ACM has to say about the $subject? Go right ahead!

Wednesday, March 14, 2007

XML Schema or RELAX NG (RNG)

We all know that the following limitations of DTD pushed for a new schema standard.

  1. DTD is expressed in its own language. We need to master another set of notations to work with DTD.
  2. We have no way to specify data types and data formats.

To overcome these issues, XML Schema (a W3C standard) and RELAX NG (an OASIS standard) were introduced. Both of them are based on XML itself.

There's been a schema war among industry experts for quite some time. You won't disagree with me that XML Schema enjoys wider acceptance in the current software market (at least in the areas I am working in). But which one is better, XML Schema or RELAX NG? Being a novice in this area, I went on to browse through old archives of the IETF XML USE newsgroup.

James Clark's post promoting RELAX NG is quite lengthy but philosophical. The follow-up post by Eric Sedlar (Oracle Corporation) defending XML Schema also has some valid points.

I agree that RNG is easy to understand with this excellent tutorial. At the same time, I was able to grasp the concepts behind XML Schema (which I currently use) quite easily. On the surface, it seems to me that the major difference lies in the style: while XML Schema categorizes patterns into distinct components such as elements, attributes, complex/simple types, etc., RNG introduces generic patterns. I would need to dive deep into both schema languages to provide an objective and practical comparison. Wait... what about Schematron? I like the fact that it (Schematron) does not impose any ordering on sibling elements (contrary to XML Schema and RNG), which is quite valid and sufficient for data-centric applications.

Friday, March 9, 2007

The World's Billionaires

Wow, according to Forbes' special report, there are over 900 billionaires in the world! That's about 20% more than the previous year. Bill Gates continues to top the list for the 13th time. Indians top the list in Asia, with 36 billionaires.

Although we see a rise in billionaires, the question we need to ask is 'has the condition of the poor improved over the year?' I think the opposite has happened. One thing these people can do is translate their success into more jobs, striking a balance between the rich and the poor.

Wednesday, March 7, 2007

Where is XML going?

I found the post by Kurt Cagle about current trends in XML fascinating..

Here's a summary I jotted down.
- rise of XML technologies - XSLT (XQuery and XForms are still a couple of years out from wide adoption) (and of course job listings)
- XHTML becoming "standard" at least at the corporate level
- rise in demand for ontologists and RDF specialists (harnessing the ability to create metadata structures)
- transitional shift towards the declarative web architecture
- rise of JSON, E4X, Linq (unlikely that these languages will replace XML)
- XML Binding languages will be the next arena of development (and contention) - the ability to assign a behavior to an XML tag is profoundly useful, and provides both the bones of the declarative structures and the muscles of the imperative one, while keeping the presentation layer safely off to one side (AJAX).
- more commercial level XSLT2 transformations
- the interesting things being done in XML are increasingly occurring in the application and vertical markets
- HL7 in the health care field
- GML in the mapping and geographical location space
- XBRL or UBL in the business space
- while RDFa may have a fairly major hill to climb in terms of adoption, it will likely end up becoming integral to the semantic web fairly soon

Link: where is xml going?

Isn't this a good example about 'history repeats'?
The idea of declarative programming has been around for about half a century. Until I started to learn ML last year, I thought I knew programming languages fairly well, which turned out to be wrong. Why is declarative programming, in the form of Lisp, Haskell, ML, etc., not commercially successful these days? The main reason might be that, compared to imperative languages such as C, C++ and Java, it's harder to come up with a solution, at least initially; or it may be that I am so used to imperative languages that I couldn't do away with them at first. Is there a marketing glitch attached to it as well (for not succeeding)? Having said all this, we now see a rise in declarative XML behavior bindings... isn't that a good example of 'history repeating itself'!
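The stylistic gap shows up even inside a single language. Here is the same computation written both ways, a trivial example of my own:

```python
# Sum of squares of the even numbers below 10, computed twice.

# Imperative style: say *how*, step by step, mutating state.
total = 0
for n in range(10):
    if n % 2 == 0:
        total += n * n

# Declarative style: say *what*, as one expression over the data.
declarative_total = sum(n * n for n in range(10) if n % 2 == 0)

print(total, declarative_total)  # 120 120
```

The declarative form states the result directly, much as an XML binding declares behavior for a tag rather than scripting each step.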

Transferring Terabytes of data from point A to point B

I read an interesting BBC news post. Chris DiBona, open source program manager at Google, talks about Google's open source effort towards overcoming the problem of sending huge amounts of data across the network. The idea was inspired by the work (Microsoft TerraServer) done by Jim Gray et al.; Gray is the father of satellite mapping on the web (I am not sure if they have found Jim, who went missing at sea).

Here's the abstract of "Microsoft TerraServer: a spatial data warehouse", published in the Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data.

Microsoft® TerraServer stores aerial, satellite, and topographic images of the earth in a SQL database available via the Internet. It is the world's largest online atlas, combining eight terabytes of image data from the United States Geological Survey (USGS) and SPIN-2. Internet browsers provide intuitive spatial and text interfaces to the data. Users need no special hardware, software, or knowledge to locate and browse imagery. This paper describes how terabytes of “Internet unfriendly” geo-spatial images were scrubbed and edited into hundreds of millions of “Internet friendly” image tiles and loaded into a SQL data warehouse. All meta-data and imagery are stored in the SQL database.

TerraServer demonstrates that general-purpose relational database technology can manage large scale image repositories, and shows that web browsers can be a good geo-spatial image presentation system.

Monday, March 5, 2007

How the Open Source Movement Has Changed Education: 10 Success Stories

This is related to my previous post. An interesting article highlighting major open source projects that have changed the shape of conventional education.


It's been nearly six years since MIT made most of its course materials freely available under the OpenCourseWare (aka OCW) initiative. The current repository has more than 1500 courses to choose from.

Tuesday, February 27, 2007

8th Annual CERIAS Symposium

The dates for the $subject are getting closer. If you're somewhere close to Indiana and want to see the current and future trends of information security, you should consider attending.


Monday, February 26, 2007

Post Removed

Sometime back, I posted a lengthy note on diabetes, with some pointers to the sources from which I got the information, in the hope that it would help those suffering from diabetes. One reader, called Ahamed Nadeem (I don't know who he is), pointed out that some of the information and sources were incorrect. I double-checked the validity of the sources and found that he was indeed correct, so I removed that post from the blog. I should be very careful when writing about sensitive topics like health. Many thanks to Nadeem.

First Formalization of NP-completeness

While I was looking at the subset-sum problem (in cryptography, we call it the knapsack problem), I came across the original paper (The Complexity of Theorem Proving Procedures), presented at the 1971 ACM Symposium on Theory of Computing, in which Stephen Cook formalized the notion of NP-completeness. The paper left open the greatest question in theoretical computer science: whether the complexity classes P and NP are equivalent.

Monday, February 19, 2007

Global Warming [Earth] Challenge

Did you watch the press conference with Richard Branson of the Virgin Group (which owns Virgin Atlantic Airways), with Al Gore (former US Vice President and the person behind the documentary "An Inconvenient Truth") sitting next to him? It's quite interesting.. the Virgin Group is gonna award $25 million for an invention that removes non-trivial amounts of greenhouse gases (mainly CO2) from the atmosphere. For a convincing proposal, they'll initially award $5 million and the rest in installments, provided the laid-out milestones are achieved. By the way, they are also pledging billions to research into bio-fuels.

It's kind of a conflict of interest.. running a large airline accounts for significant emissions of greenhouse gases. However, it's better than doing nothing to compensate for the damage caused.

I'm sure this'll get many universities involved and interested.

All scientists and inventors, get working!

Sunday, February 18, 2007

SOA Forecast for 2007

Here's what ZapThink forecasts:

#1: SOA Quality and Testing Market the Next Must-Have Space
- More testing and quality tools and solutions for SOA
#2: Enterprise Architect Drought
- There simply won’t be enough qualified and SOA experienced enterprise architects (EA) around
#3: The SOA Suite Busts a Few Buttons… Open Source the Remedy?
- SOA suites might get a bit too big (vendors of a service or a product have a self-interest in promoting the need for products that probably won't keep you away from their sales department for long)
- Leveraging open source SOA realizations such as Apache Axis2 (Java/C)

Read the full post.

Sanjiva's views on WS-* and REST

Sanjiva Weerawarana was recently interviewed by InfoQ. Here's his opinion on the subject.

Valentine's Day Reminder for Secret of Success

(Courtesy: Creating Passionate Users)

Read Kathy's valentine's day message :)

Monday, February 12, 2007

Web Services Performance, design considerations and ease of use

Employees at WSO2 published performance results of Axis2 vs. XFire, along with the source code and the performance testing methodology used. The Bileblogger criticized this with racial comments, which I don't want to elaborate on here, and this led to a series of nasty exchanges. Shortly afterwards, Sun's JAX-WS was benchmarked against Axis2. All this led to some constructive feedback and to questions about the usefulness and relevance of these benchmarks. Steve Loughran argues that one cannot realistically infer any result from these performance numbers. Anil John points out that these performance tests are not on par with the design considerations of web services.

There's a point in what Anil says. Web services should simply be XML-in and XML-out, in order to encourage interoperability and take advantage of the powerful XML representation. Axis2 has in fact been designed for the XML-in/XML-out model. This comes at the cost of ease of use, which is why data binding frameworks such as ADB, JAXB, JiBX, etc. have been introduced. We need to strike a balance between performance and ease of use, and come up with benchmarks that go well with the design considerations (such as message-based programming over RPC style, asynchronous web services, large data transfers, etc.).

Sunday, February 4, 2007

Social Media

Enterprise 1.0 vs. Enterprise 2.0

From The Software Abstractions Blog..
What exactly is "Enterprise 2.0"? The main themes that are emerging seem to be products/technologies that:
  1. facilitate better communication
  2. harness collective intelligence and innovation
  3. enable user-generated applications (enterprise mashups!)
The communication aspects present the most powerful paradigm shift.

(Courtesy: Software Abstractions blog)


59 years ago on a day like today

In 1948, on a day like today, with the participation of representatives from many countries, the Lion flag was hoisted high in the blue skies proudly marking the symbolic gesture of independence.

The Lion Flag:

More Details:
About Sri Lanka
About Sri Lankan history

Saturday, February 3, 2007

5 Ways to spot a liar!

Little white lies of all sorts are tossed our way daily. These may not matter a whole lot, but the real whoppers do. Sometime back, I read an article in RD on how to spot a liar. I know I am not very good at spotting them. Here are five tips from the experts in that article :)

1. Hear the Voices
- Voice (pitch) changes may well indicate deceit.
- And so does change in speech rate, breathing pattern, etc.

2. Watch Those Words
- How do we spot lies in written material: letters, resumes, etc.?
- Believe it or not, people have developed software to spot deception in written content. One example is LIWC (Linguistic Inquiry and Word Count) by some people at the University of Texas.
- Some tips.
-- liars tend to use fewer first person pronouns
-- liars tend to use fewer exclusionary words - but, nor, except, whereas (they have trouble with complex thinking)

3. Look Past Shifty Eyes
- If people look away while answering something that should be easy to answer, that may be an indication of deceit (though this is not always true), so watch out for eye gaze.

4. Get Better at Body Language
-Observe the whole person and compare their current behavior with their usual body language
--ex: a usually quiet person who suddenly talks a lot, or a usually talkative person who is now quiet

5. Check for Emotional "Leaks"
-The micro-expressions that flit across people's faces often expose what they're truly feeling or thinking as opposed to what they'd like us to believe. It isn't the frequency of a smile that matters, but the type of smile!

Professionals trained in the art of lie detection use all these techniques.

Can you spot a liar? Take the quiz!

Friday, February 2, 2007

Introduction to C++ Programming [Part 4]

Previous Related Posts: Part1 Part2 Part3


An operator is a symbol that causes the compiler to take an action. Operators act on one or more operands. There are several types of operators:

- Arithmetic operators
- Assignment operators
- Relational operators
- Logical operators

Arithmetic operators

+ (addition): 3 + 20 evaluates to 23
- (subtraction): 10 - 4 evaluates to 6
* (multiplication): 2 * 13 evaluates to 26
/ (division): 15 / 2 evaluates to 7 (integer division truncates)
% (modulus): 17 % 3 evaluates to 2
++ (increment): i++ is equivalent to i = i + 1
-- (decrement): i-- is equivalent to i = i - 1

++ and -- are unary operators, meaning that they operate on a single operand. Each of these operators has two versions, post and pre. Let’s try to understand this by the following example.


//post increment
int x = 10;
int y = 20;
int z = (x++) + y; //z evaluates to 30; x is then incremented to 11
int a = x + y; //a evaluates to 31

//pre increment
x = 10;
y = 20;
z = (++x) + y; //x is incremented to 11 first, so z evaluates to 31
a = x + y; //a evaluates to 31

Operator precedence

Precedence is the order in which a program performs the operations in a formula. Each operator has a precedence value, and these precedence values are used to evaluate expressions. If one operator has precedence over another operator, it is evaluated first. The following table shows the operator precedence in decreasing order.

( )
prefix ++ --
* / %
+ -
< <= > >=
== !=

z = 2 + 5 * 3; //17
z = (2 + 5) * 3; //21

Assignment operators

a = 5;
a *= 5; //equivalent to a = a * 5;
a += 5; //equivalent to a = a + 5;
a -= 5; //equivalent to a = a - 5;
a /= 5; //equivalent to a = a / 5;
a %= 5; //equivalent to a = a % 5;

Relational operators

== Equal to
<= Less than or equal to
>= Greater than or equal to
< Less than
> Greater than
!= Not equal to

Logical operators

&& Logical AND
|| Logical OR
! Logical NOT

(x > 5) && (x < 25) //x is greater than 5 and less than 25
(x < 5) || (x > 25) //x is less than 5 or greater than 25
!(x > 5) //equivalent to (x <= 5)

Escape Characters

The C++ compiler recognizes some special characters for formatting. The following table shows the most common ones. You put these into your code by typing the backslash (called the escape character), followed by the character.

\n new line
\t horizontal tab
\a alert (bell)
\\ backslash
\" double quote
\' single quote
\? question mark

cout << "This is a test.\n";
cout << "Now alarm \a will ring.\n";

Control Structures

If Statement

The if statement is used to execute a certain set of statements based on an expression. There are several forms of the if statement; the most commonly used forms are listed below. You don't need to use braces if there is only one statement.

if (expression)
   statement;

if (expression)
{
   statements;
}

if (expression1)
{
   statements;
}
else if (expression2)
{
   statements;
}
else
{
   statements;
}

if (x != 0)
   cout << "x is not equal to zero\n";
else
   cout << "x is equal to zero\n";

Conditional (Ternary) operator:

The conditional operator (?:) is C++'s only ternary operator; that is, it is the only operator to take three terms. The conditional operator takes three expressions and returns a value.


(expression1) ? (expression2) : (expression3)

This line is read as "If expression1 is true, return the value of expression2; otherwise, return the value of expression3." Typically, this value would be assigned to a variable.


int x = 5;
int y = 10;
int z = (x > y) ? y : x; //since x > y is false, value of x is assigned to z.

While loop

A condition is tested, and if it is true, the body of the while loop is executed. The while statement checks its condition before executing any of its statements, so if the condition evaluates to false on the first test, the body is skipped entirely and never executes.


while (condition)
{
   statements;
}

int iCount = 0;
while (iCount < 2)
{
   cout << "Count is " << iCount << "\n";
   iCount++;
}

Output:

Count is 0
Count is 1
continue and break statements:

Usually, a break statement is used to break out of a loop based on a certain condition, and a continue statement is used to skip the rest of the current iteration based on a certain condition.

e.g1. (break):

int iCount = 0;
while (iCount < 2)
{
   cout << "Count is " << iCount << "\n";
   if (iCount == 0)
   {
      cout << "Breaking out of the loop\n";
      break;
   }
   iCount++;
}

Output:

Count is 0
Breaking out of the loop

e.g2. (continue):

int iCount = 0;
while (iCount < 3)
{
   cout << "Count is " << iCount << "\n";
   iCount++;
   if (iCount == 1)
   {
      cout << "Continue with next loop\n";
      continue;
   }
   cout << "Next count is " << iCount << "\n";
}

Output:

Count is 0
Continue with next loop
Count is 1
Next count is 2
Count is 2
Next count is 3

Do-While Loop

The do-while loop executes the body of the loop before its condition is tested and ensures that the body always executes at least once.


do
{
   statements;
}
while (condition);

int iCount = 0;
do
{
   cout << "Count is " << iCount << "\n";
}
while (iCount > 0);

Output:

Count is 0

Note that even though the condition (iCount > 0) is false from the start, the body still executes once.

for Loop

You'll often find yourself setting up a starting condition, testing to see if the condition is true, and incrementing or otherwise changing a variable each time through the loop. In such situations, a for loop is a more convenient way to iterate. A for loop combines the three steps of initialization, test, and increment into one statement.

for (initialization; test; increment/decrement)
{
   statements;
}

for (int iCount = 0; iCount < 2; iCount++)
   cout << "Count is " << iCount << "\n";

Output:

Count is 0
Count is 1

switch Statement

The switch statement is a more convenient way of branching than the if statement when you want to branch to a set of statements based on several possible values.

switch (expression)
{
case value1:
   statements;
   break;
case value2:
   statements;
   break;
default:
   statements;
}

Expression is any legal C++ expression. If one of the case values matches the expression, execution jumps to those statements and continues to the end of the switch block, unless a break statement is encountered. If nothing matches, execution branches to the optional default statement. It is important to note that if there is no break statement at the end of a case statement, execution will fall through to the next case statement. This is sometimes necessary, but usually is an error.


int iScore; //assume iScore is assigned a value through cin

cout << "Score is " << iScore << endl;

switch (iScore)
{
case 5:
   cout << "Performance is average" << endl;
   break;
case 10:
   cout << "Performance is good" << endl;
   break;
default:
   cout << "Invalid Score" << endl;
}