Category Archives: Information

General news and information for the CAcert community, or about security in general

Gemplus Kit

I had a few more PKI cards turn up today; so far no luck with those either, even though they have been pre-loaded with a GemSAFE image. I plan to contact the local distributor in the morning to see if we can nut something out. I did take a Stanley knife to the GemSAFE card I have to fit it in the GemPC Key reader, and it works quite nicely.

Below are some photos I took earlier on tonight.

Is it finally time to sound the death knell to passwords?

Security mechanisms can be defined in three ways: “something you know”, “something you have” and “something you are”.

  • Passwords are something you know
  • PKI cards/tokens are something you have
  • Biometrics are something you are

The problem I have with biometrics is that you can’t change the token, and that can be bad for a number of reasons. For example, some new cars come with a biometric reader so the makers can claim they are harder to steal, but as one proud new owner found out, it just gives criminals a reason to hurt you more: they stole his car and cut off his finger as well, so now he has no car and one less finger!

My preference lies with something you have, that is, PKI hardware, which in most cases also requires a PIN (something you know), adding up to two-factor authentication. The beauty of this system is that the PIN and the card are useless by themselves: the card alone is useless because if you get the PIN wrong three times the card locks itself to prevent brute-force attacks, and of course the PIN by itself is pointless.
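To make that concrete, here’s a minimal sketch of the lockout logic, using a purely illustrative Card class; real cards enforce the retry counter in hardware:

```python
# Illustrative sketch of the PIN retry counter a PKI card enforces in
# hardware: three wrong PINs in a row and the card locks itself.
MAX_TRIES = 3

class Card:
    def __init__(self, pin: str):
        self._pin = pin
        self.tries_left = MAX_TRIES
        self.locked = False

    def verify_pin(self, attempt: str) -> bool:
        if self.locked:
            raise RuntimeError("card is locked")
        if attempt == self._pin:
            self.tries_left = MAX_TRIES  # a correct PIN resets the counter
            return True
        self.tries_left -= 1
        if self.tries_left == 0:
            self.locked = True  # a stolen card is now useless without the PIN
        return False
```

Stealing the card gives an attacker at most three guesses, and stealing the PIN alone gives them nothing at all.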

And so begins my epic tale of getting PKI hardware to work with Linux; the difficulty I encountered highlights one of the many reasons PKI hasn’t taken off in a big way.

This week I met up with a nice gentleman who happens to be the distributor for Gemplus products in Australia/New Zealand, and who was kind enough to give me a few of their products for evaluation purposes. I believe others have also managed to get evaluation kit from Aladdin; check the main mailing list archive for details on that.

In any case this was my first look at any kind of PKI-based hardware, and, as per usual for Linux, driver support and integration between applications leave a lot to be desired, but the lack of coherent documentation was an even bigger headache.


Conundrum

One FUD issue some people keep regurgitating to keep us from being included in browsers is the worry that we might issue certificates for the likes of paypal.com. Most people pushing this line tend to neglect to mention that a falsely issued certificate on its own is mostly useless unless you can also attack the hosts file on a user’s computer or the DNS system, in which case there are bigger problems than falsely issued SSL certificates, especially since most phishing attacks (the attacks most likely to abuse this) don’t even bother using SSL.

Currently we require people to have code-signing access before issuing IDN/punycode domain/email certificates, and it has been suggested that we have a similar requirement for anyone requesting certificates for high-profile sites.
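To see why IDN/punycode domains deserve the extra scrutiny, here is the classic homograph example in Python; the Cyrillic “а” (U+0430) renders identically to the Latin “a” in most fonts, yet names a completely different domain:

```python
# Cyrillic "а" (U+0430) looks like Latin "a" but yields a different
# domain once encoded to punycode, making pixel-perfect spoofs possible.
spoofed = "p\u0430ypal.com"          # displays as paypal.com
print(spoofed.encode("idna"))        # b'xn--pypal-4ve.com'
print("paypal.com".encode("idna"))   # b'paypal.com'
```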

One way to determine popularity is by sites like alexa.com which give out rankings.

I guess the question is how popular must a site be if we want to enforce this, and over what time period?

Another concern is with large organisations: a lot of departments inside these organisations run their own sub-domains, while the organisation’s main domain is usually handled by the central IT department. This could be cause for concern if someone registers the main domain and starts getting certificates for either the entire organisation or for sub-domains they shouldn’t be allowed to control. This is usually governed by an organisation’s IT policy, but it could also lead to someone intercepting traffic by setting up a reverse proxy, and there are questions hanging over this, as it will potentially affect legitimate users one way or another.

Browser exploit v SSL root key in the wild

Many people have cited, as the reason for excluding us, doubts about our ability to protect our root certificate, and in fact most consider a breach worse than a critical browser exploit. But the more I think about this, the more I’m convinced it is just wrong, so I went to the trouble of breaking the situation down logically. Here’s my risk analysis:

A browser exploit can affect all users of a particular browser (Mozilla says 50 million users, so I’ll run with estimates based on that).

Browser exploits are pretty clear-cut to calculate: they have a potential base of 50 million users to exploit.

With a bad certificate, on the other hand, the numbers aren’t so clear, and you have to do some educated guessing as to what the real risk would be.

Without more specific regional breakdowns, I’ll have to assume that the 50 million users are distributed more or less evenly across eastern and western Europe, North America, parts of Central and Latin America, and the Asia-Pacific region.

We also have to assume that most banks are either very geographically specific or, at most, run a website on a per-country basis, operating different sites in different countries.

To exploit DNS effectively you either have to control a root name server or be able to exploit the individual name servers of many ISPs concurrently. The banking industry and large merchants already pay large sums of money to be notified of DNS-based attacks, so the risk to them is somewhat mitigated compared to normal merchant sites. For normal merchants the threat is considerably lower anyway: people are in contact with them far less often than with their banks, and an attacker would have to replicate the entire shopping cart, since customers make product selections before purchasing.

OK, so if we evenly distribute the number of Firefox copies over six regions and assume roughly equal penetration, we end up with about 8 to 10 million users in each region. Since those numbers are spread over multiple countries, we’ll assume for the time being that there are at most approximately 3 million users in any given country.

Further to that, the number of users likely to be affected by a DNS-based attack on a nationally used banking website is in the hundreds of thousands at most (and I’m being generous; more than likely it would be MUCH less). To attack companies like Amazon.com or eBay.com you’d have to replicate their entire shopping cart systems, and there are easier attacks currently being deployed.

So a browser exploit is likely to affect 50,000,000 users.
A root certificate breach is likely to affect 100,000 users or fewer, and even that assumes a successful mass-scale DNS breach, whereas a browser exploit may only need the user to visit a web page.

So the detrimental effect of a browser exploit is somewhere in the vicinity of 500x greater than that of an exploited SSL root certificate, and it could easily be 5000x or more depending on the figures you base your breakdown on, how proactive the bank is in preventing other forms of attack, and so on.
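For what it’s worth, the arithmetic behind those figures, using the post’s own estimates rather than measured data, is simply:

```python
# Back-of-the-envelope version of the estimates above.
browser_users = 50_000_000      # Mozilla's claimed user base
regions = 6                     # rough regional split used above
users_per_region = browser_users // regions   # ~8.3 million per region
users_per_country = 3_000_000   # assumed upper bound per country
dns_attack_victims = 100_000    # generous upper bound for one bank

print(browser_users / dns_attack_victims)     # 500.0 -> the ~500x figure
```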

Just one final note: if a domain is hijacked, or even just DNS-spoofed, you don’t need a root certificate to escape into the wild; there are plenty of CAs already in the browser root stores that will issue control-of-domain certificates, including Verisign (via Thawte 123), Geotrust and Godaddy, to name but a few. This is part of the reason banks employ services to prevent DNS-based attacks, although the real reason is that people just don’t take enough care to verify they are connected via SSL before sending sensitive information.

So no matter how the above risk is twisted with FUD, the fact is that the threat of an SSL root key loose in the wild is highly overrated, because other factors mitigate the risk.

Pushing Ahead

I think one of the key things that will help push things forward is, and always has been, our web of trust. On the surface it seems like just a way to unlock features on the website, such as increasing the length of time certificates are valid or getting your name/company details onto certificates. However, in my opinion the big advantage our web of trust has over most other internet identity schemes, one that is currently largely unrealised, is that it is technology neutral; this gives us an edge over things like the PGP web of trust, which is locked to a specific technology. If at any time in the future the world moves away from PKI and x509 certificates, we can simply integrate the new technology into our system and keep going like we always have.

Also, an insightful comment submitted on the previous post suggested that if we managed to convert 10% of self-signed websites, we’d be well on our way to having a lot more mind share. I guess this is where things like inclusion in the Debian ca-certificates package start to become significant, because that can then be leveraged to make certificate chaining easier once we start offering the option of certificates signed by a higher-trust certificate for those with 50 points or more, which hopefully will happen within the next 24 hours, all going well!
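To illustrate why distro inclusion matters: once the root certificate ships with the system, verifying a chained server certificate requires no manual imports at all. A minimal sketch, where the cafile path and host name are assumptions for illustration:

```python
# Verify a server's certificate chain against a locally installed
# CAcert root; the cafile path and host name are illustrative only.
import socket
import ssl

ctx = ssl.create_default_context(cafile="/etc/ssl/certs/cacert.org.pem")

with socket.create_connection(("www.cacert.org", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="www.cacert.org") as tls:
        # The handshake only succeeds if the chain verifies up to the root.
        print(tls.getpeercert()["subject"])
```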

So I guess this raises the question: does anyone have contacts with any distros, or with other high-profile sites using self-signed certificates, that we can talk to? If you’re able to convince them to start getting their certs signed by us, it will push things along that little bit further, that little bit quicker. And of course getting assured will improve the overall trust of the network, especially the more intertwined it becomes.

Unassured Certificates Updates

I really don’t think a consensus was reached on the issue of what to do about unassured people issuing certificates. Long term, my thought was to disable most functions for anyone not assured, but that requires some kind of critical mass which I just don’t think we have at present.

I think intent plays a big part here. By that I mean that if you are utilising CAcert certificates to secure email, or even just IRC, you’re more likely to stop using CAcert and go back to self-signing if you’re not assured. Those pushing to stop unassured people from issuing certificates, on the other hand, tend to fall into one of two camps: those already assured, and those who think SSL can in some way greatly increase security, and I guess to an extent those already assured are perhaps leaning towards the latter as well.

So I guess here is my challenge to those wanting a more secure internet: get out there and start assuring more people, because that leads to the critical mass which gives us the ability to make decisions based on merit rather than knee-jerk reactions.

Conflicts of interest

What seems like a relatively simple thing in the SSH world, a simple warning when a fingerprint changes, you would think would be a very valuable thing in the SSL world as well. The reason this is important comes down to a simple problem: Verisign has the potential to do a lot of things. After all, they’re the ones left guarding a couple of root name servers (which control the entire DNS system), they have direct control over several TLD zones (.com/.net), and to top things off they have their root certificates in every browser out there.

However, that’s just the tip of the iceberg: Verisign is also one of the largest providers of snooping services for the US Government, and for anyone else able to afford their services, and this is where such a warning would come in handy.

If Verisign decided to redirect traffic away from your website, by making their name servers authoritative for your domain and pointing your host name at their proxy server, which in turn relays traffic to your real website, they could. But wait, you’re using SSL on your web-mail so no one can listen in on your passwords and/or emails, right? WRONG! This is where my earlier point about Verisign having their root certificates in browsers comes in: they can issue a new certificate for their proxy server, say for “*” (a wildcard matching any domain/host name/everything), and if you connect over HTTPS the browser will never warn you that you are hitting a different certificate, and that all your traffic is now being proxied, captured, prodded, poked and anything else you can think of.

What’s that I hear you say, that it’s illegal to intercept traffic like that? Think again: the NSA is doing it to all traffic crossing US international borders and has said that any and all of it is fair game. Not to mention that the US has some interesting laws that could be used to force Verisign into doing it, with a gag order thrown in for good measure!

So back to my original statement: if browsers cached certificates and checked that they haven’t changed, any interception like this would actively warn the user, rather than leaving them the passive option of manually checking who issued a certificate.
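As a rough sketch of the SSH-style behaviour I’m describing, the following caches each site’s certificate fingerprint and complains when it changes; the cache file location is arbitrary:

```python
# SSH-style certificate pinning: remember each host's certificate
# fingerprint and warn loudly if it ever changes between visits.
import hashlib
import json
import os
import ssl

CACHE = os.path.expanduser("~/.cert_fingerprints.json")

def check_host(host: str, port: int = 443) -> None:
    pem = ssl.get_server_certificate((host, port))
    fp = hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()
    cache = json.load(open(CACHE)) if os.path.exists(CACHE) else {}
    if host in cache and cache[host] != fp:
        print(f"WARNING: certificate for {host} has CHANGED!")
    cache[host] = fp
    with open(CACHE, "w") as f:
        json.dump(cache, f)

check_host("www.cacert.org")
```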

This then leads to an interesting dilemma for browser developers: if on one hand you state that all traffic that goes via SSL/TLS is encrypted and can’t be read by others, while on the other hand it really can be, doesn’t that make you liable for making misleading statements?

The guys working on Mozilla software don’t seem to think so, nor do they seem to think it is a good idea to actually provide this security for their users; they’d rather you lived under a rock and believed commercial certificate authorities are perfect entities. So far I’ve been met by nothing but a wall of silence on their newsgroups whenever the topic comes up, and when I filed a bug report it was hastily marked as invalid.

I’ve said it before and I’ll say it again: I trust SSL in most browsers with my credit card number, but I certainly wouldn’t be trusting it if I were a Chinese dissident!

CAcert now has a fully functioning OCSP setup

CAcert now has an operational OpenCA (RFC-compliant) OCSP responder. All certificates issued after 2005-05-16 should have the OCSP service URL automatically included, and your OCSP client should periodically check the certificate status. The OCSP responder issues signed responses over HTTP; the OCSP address is not a normal website, and you can only connect to it with an OCSP client, such as OpenSSL. Once your client is running, you can tell it to connect to http://ocsp.cacert.org or http://ocsp.cacert.org:2560.
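A status query can also be scripted. Here’s a rough sketch using Python’s cryptography library; mycert.pem and cacert-issuer.pem are placeholders for your own certificate and the CAcert certificate that issued it:

```python
# Build an OCSP request for a certificate, POST it to the responder,
# and print the status from the signed response.
import urllib.request

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.x509 import ocsp

cert = x509.load_pem_x509_certificate(open("mycert.pem", "rb").read())
issuer = x509.load_pem_x509_certificate(open("cacert-issuer.pem", "rb").read())

req = (ocsp.OCSPRequestBuilder()
       .add_certificate(cert, issuer, hashes.SHA1())
       .build())

http_req = urllib.request.Request(
    "http://ocsp.cacert.org",
    data=req.public_bytes(serialization.Encoding.DER),
    headers={"Content-Type": "application/ocsp-request"},
)
with urllib.request.urlopen(http_req) as resp:
    ocsp_resp = ocsp.load_der_ocsp_response(resp.read())

print(ocsp_resp.certificate_status)  # e.g. OCSPCertStatus.GOOD
```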

To activate OCSP in Firefox, use the settings below.

  • Click on the Tools menu, then select Options.
  • After the Options window appears, select Advanced.
  • Scroll down until you get to the Validation section, OCSP will be the last option.
  • By default “Do not use OCSP for certificate Validation” is selected. Change to the second option, “Use OCSP to validate only certificates that specify an OCSP service URL”.
  • Click OK to close the Options window.

We run our OCSP responder on port 2560 (the OpenCA default), and we also make it available as a vHost in Apache on port 80, which will be important for anyone stuck behind a firewall and unable to connect to ports other than 80 or 443.

We now issue all certificates with the OCSP responder URI listed as http://ocsp.cacert.org, and we plan to distribute servers around the world via round-robin DNS. If/when load or bandwidth becomes a problem in the future, we can simply add more OCSP responders, much as you would add secondary name servers for DNS, so things could easily be made highly distributed with our current configuration.

Due to the threat model used in developing the OCSP RFC, high availability will be a key issue in running/maintaining OCSP services: any client with OCSP turned on will fail to connect to any site whose certificate doesn’t have a valid OCSP response. As far as I’m aware, Mozilla products currently do not have any form of OCSP caching, so reading signed/encrypted email on a plane in Thunderbird could be difficult at this point in time. Microsoft is apparently developing an OCSP client for its next version of Windows/MSIE that does some caching, although it will be interesting to see how well that works.

Domains with CAcert in them.

A question came up the other day: what policy do we have about third parties registering domains that contain the word “CAcert”? In the past few days I’ve learned of a few more domains either registered or applied for, and while I can assume (hopefully correctly) that most people will do the right thing, applying for them either to point back to our main website or to host localised content and support, there is, like everything else in life, the potential for abuse.

Up until now there has been a reasonably small risk of anything bad occurring, but as things progress this is something that should be taken seriously, as it could undermine our efforts to market ourselves: unless labelled correctly, these sites could be mistaken for official offshoots, when in actual fact the main website is the only site we officially operate.

The Mozilla guys have recently published their trademark policy, which includes a clause on domain names containing Mozilla trademarks:

If you want to include all or part of a Mozilla trademark in a domain name, you have to receive written permission from Mozilla. People naturally associate domain names with organizations whose names sound similar. Almost any use of a Mozilla trademark in a domain name is likely to confuse consumers, thus running afoul of the overarching requirement that any use of a Mozilla trademark be non-confusing. If you would like to build a Mozilla, Firefox Internet browser or Thunderbird e-mail client promotional site for your region, we encourage you to join an existing official localization project.

To receive written permission, please download and follow the directions as outlined in the Domain Name License.

There are a lot of examples out there of other community projects spawning domain names based on the original project’s name for localisation/regionalisation purposes, and in at least one case the Plone Foundation requires all such domains to be handed over to them.

The simple solution might be to offer <country code>.CAcert.org (such as br.CAcert.org), or the same under one of our other domains (.com/.net), and to ask everyone nicely to refrain from purchasing confusing domains and instead request a sub-domain from us.

I’m guessing we need to start thinking about official policies on other things as well that might be used against the spirit in which they were originally created.

All in the name of marketing

So you’re a commercial certificate authority looking for an edge over the other companies doing the same thing; why not offer some kind of insurance!

Well, that’s exactly what Godaddy has done: they’re offering a US$1000 warranty, but it’s the same snake-oil warranty that most other CAs offer.

So anyway, Gerv from the Mozilla Foundation made a nice little post to the Mozilla newsgroup today about how he couldn’t find out from their website exactly what the warranty covers, so he decided to phone them up and ask.

Long story short, the comments made by the sales representative say it all:

“Well say, for example, I own www.happycompany.com and I have a Verisign certificate. Then, a fraudster registers www.happy-company.com, gets a certificate from you and rips off my customers. Is that situation covered? Would you pay out?”

“Well, no. You see, we’re not securing you, we’re securing the other guy. You have to be registered with us.”

and this:

“Have you ever paid out under the warranty program?”

“No. It’s really there just to reassure you that it’s a true 128-bit certificate, and to make you feel better about purchasing it.”

I’m really not all that surprised by this, I guess, as it’s the same snake oil that’s been pushed and marketed all along.