Archive

Educational

My #1 motto currently is “Don’t do something until you understand what you’re doing”.

Software developers are often tasked with doing something they don’t understand. This is normal in the enterprise world, particularly where “segregation of duties” is fashionable. When requirements are hashed out between business people and requirement engineers, designs between requirement engineers and architects, and implementation concepts between those architects and those responsible for the product – the developer is usually invited very late to the party. The developer is put under pressure to deliver by the entire enterprise software delivery apparatus, and if the developer then starts asking questions, this is invariably frowned upon by the powers that be. This pressure psychology stops developers from asking necessary questions, and a culture of “just do it” takes over. The results are unsatisfactory for everyone – delivery quality suffers, and developers do not learn and grow as much as they could to satisfy the needs of the larger corporation.

If everyone tried to work by this motto, and let others work by this motto, then I’m sure IT projects would be more successful.

Personally, I make every effort to really understand what I’m doing, and when acting as scrum master or in roles where work is prepared for others, I put making things understandable for others at the top of my agenda.

One excellent guide to hardening an Apache web server’s SSL ciphers is this article. The guide follows the best practices document in order to pass the Qualys “PCI-DSS” compliance check with straight A’s.

To cut to the “answer” – the guide suggests using the following OpenSSL cipher list

openssl ciphers -v 'ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AES:RSA+3DES:!ADH:!AECDH:!MD5:!DSS'
which gives (OpenSSL 1.0.1c 10 May 2012):
ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:…long list…:AES256-SHA256:AES256-SHA:AES128-GCM-SHA256:AES128-SHA256:AES128-SHA:DES-CBC3-SHA

Unfortunately this long, ordered cipher list cannot be used directly in a Java web server configuration, because JSSE uses the standard names defined in the TLS Cipher Suite Registry. The hint about where to find the registry came from the Java CipherSuite source code.

I looked for a utility to map the names from OpenSSL to JSSE, without any luck. Thankfully the TLS Cipher Suite Registry lets you download a CSV file of the official codes and names of the suites, called “tls-parameters-4.csv”. The openssl ciphers “-V” option outputs one line per OpenSSL cipher suite, including the suite’s official two-byte code, which can be looked up in the CSV. So with a few lines of shell scripting, the mapping can be automated.

$ cat openssl2jsse.sh
#!/bin/bash
# $1 identifies the suite – either its OpenSSL name or its code.
# Find the suite's two-byte code in "openssl ciphers -V" output,
# then look up the JSSE name in the IANA registry CSV
# ( field 3, because the quoted code "0xXX,0xXX" itself contains a comma ).
CODE=`openssl ciphers -V | grep $1 | sed 's/ //g' | cut -d '-' -f1`
grep $CODE tls-parameters-4.csv | cut -d ',' -f3

$ cat resolve.sh
#!/bin/bash
# Read "openssl ciphers -V" output line by line, resolve each suite
# to its JSSE name, and accumulate a comma-separated list of the names.
COMBINEDLIST=
while read line
do
ENTRY=`./openssl2jsse.sh $line`
echo $ENTRY
if [ -z "$COMBINEDLIST" ]
then
COMBINEDLIST=$ENTRY
else
COMBINEDLIST=$COMBINEDLIST,$ENTRY
fi
done
echo "ciphers="$COMBINEDLIST

$ openssl ciphers -V ‘ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AES:RSA+3DES:!ADH:!AECDH:!MD5:!DSS’ | ./resolve.sh

ciphers=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,…long list…,TLS_RSA_WITH_3DES_EDE_CBC_SHA

Ta da. It’s been a while since I did any shell scripting – and I’m so proud of the result that I’m posting it here.
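
For completeness: the resulting list can be pasted into the Java web server’s configuration. A rough sketch, as a hypothetical Tomcat HTTPS connector – the keystore details are placeholders, and the full generated list goes into the ciphers attribute:

<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           scheme="https" secure="true" sslProtocol="TLS"
           keystoreFile="conf/server.jks" keystorePass="changeit"
           ciphers="TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" />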

Setting up an HTTPS web server is not a trivial undertaking. The cipher suites which the server can be configured to support allow for a great number of choices. I have come across “best practices” guides on the internet which a practitioner can use to decide which SSL ciphers to allow. There are also several possibilities to test an HTTPS web server for vulnerabilities and “compliance” with best practices. Even though I have been working with these SSL concepts for many years, I realized that I did not know the following:

Is the choice of SSL cipher suite ( made during the SSL handshake ) influenced by the type of key used for the server certificate?

There are typically several possibilities to choose from for the public/private keypair used when requesting an SSL server certificate from a commercial CA. Generally, either an RSA or a DSA key is used, but other types exist – like ECDSA. Is the TLS cipher suite negotiated in the SSL handshake limited by the type of the server’s keypair?

In a nutshell, the answer is YES, but the reasons are fairly complex. The TLS handshake starts with the client indicating, in the ClientHello message, the list of cipher suites which it supports. This takes place before the client knows what kind of certificate the server has and which ciphers the server is going to offer. The server replies with the ServerHello message, indicating the “chosen” cipher suite. If the client and server do not have an overlapping set of ciphers, the handshake will not complete successfully. The description of the SSL handshake for the AES128-SHA cipher suite in this article on how to set up SSL with perfect forward secrecy states that the pre-master secret is communicated from the client to the server by encrypting it with the public key of the server ( so that only the server, which holds the private key, can decrypt it ). This works for RSA public/private keypairs, which support encryption. Taking a closer look at the suite using openssl ciphers -v gives more insight into how the cipher suite is “composed”.


$ openssl ciphers -v 'AES256-SHA:AES128-SHA'
AES256-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA1
AES128-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(128) Mac=SHA1

The “Kx=RSA” indicates that RSA is the key exchange mechanism, and “Au=RSA” indicates that RSA is the authentication mechanism. It is the use of RSA as the key exchange mechanism which limits the cipher suite’s use to servers whose SSL certificate contains an RSA public key. The cipher suites based on RSA key exchange can be listed with openssl ciphers -v kRSA.
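
For example ( output abbreviated – the exact set depends on the OpenSSL version and build ):

$ openssl ciphers -v kRSA
AES256-GCM-SHA384 TLSv1.2 Kx=RSA Au=RSA Enc=AESGCM(256) Mac=AEAD
AES256-SHA256 TLSv1.2 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA256
AES256-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA1
...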

So what is the effect of server key choice on PFS cipher suites, like ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:EDH-DSS-DES-CBC3-SHA? From

$ openssl ciphers -v 'kEDH:!ADH'
DHE-RSA-AES256-SHA256 TLSv1.2 Kx=DH Au=RSA Enc=AES(256) Mac=SHA256
DHE-DSS-AES256-SHA256 TLSv1.2 Kx=DH Au=DSS Enc=AES(256) Mac=SHA256
...
$ openssl ciphers -v | grep EDH
EDH-RSA-DES-CBC3-SHA SSLv3 Kx=DH Au=RSA Enc=3DES(168) Mac=SHA1
EDH-DSS-DES-CBC3-SHA SSLv3 Kx=DH Au=DSS Enc=3DES(168) Mac=SHA1
...

it would seem that most EDH and DHE suites work with both RSA and DSS server keys. And from

$ openssl ciphers -v ECDH | grep ECDHE
ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(256) Mac=AEAD
ECDHE-ECDSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESGCM(256) Mac=AEAD
...

it would seem that the ECDHE cipher suites work with both RSA and ECDSA server keys.

Judging from the overall support across the different cipher suites, RSA server keys are the most versatile. RSA server keys work with RSA, ECDH, and DH key exchange mechanisms, whereas DSS keys currently lack EC support in OpenSSL 1.0.
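
A rough way to verify this is to count the suites per authentication key type – the aRSA / aDSS / aECDSA cipherlist keywords select suites by the server key type that can authenticate them:

$ openssl ciphers -v 'aRSA' | wc -l
$ openssl ciphers -v 'aDSS' | wc -l
$ openssl ciphers -v 'aECDSA' | wc -l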

Now, although the server key type does influence the cipher suites which can be used during the TLS handshake, client authentication is “orthogonal” to the establishment of the secure session. Client authentication just relies on the client signing some data exchanged in the handshake with the client’s private key, and the server checking the signature against the client’s public key. This means that HTTPS client authentication can use any type of client key, irrespective of the server’s key type ( as long as the server can check the client’s signatures ).
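
Client authentication is easy to try out with openssl’s built-in client – a sketch, assuming the client’s certificate and key are in the hypothetical PEM files client.crt and client.key:

$ openssl s_client -connect host:443 -cert client.crt -key client.key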

Don’t forget you can test a running server’s TLS handshake anytime with the openssl command

openssl s_client -tls1 -cipher ECDH -connect host:port

DNS is one of the major functional pillars upon which the internet is constructed. Its function of resolving human-friendly domain names to IP addresses has effectively been around since the beginning of internet time, when the internet was a peaceful place where only academics roamed. No surprise that security was not a major priority back then. I’ve intuitively known that DNS is “insecure”, but in the sense of my blog “If you think you’ve understood something – you probably haven’t”, my understanding of the technical mechanisms was superficial.

What brought me onto the subject of DNSSEC is a bit convoluted: I was investigating the vulnerabilities of HTTPS with client authentication for a project of mine called SMT – Secure Messaging Transport. I remember reading a research paper from Vitaly Shmatikov about the flaws in SSL implementations – The Most Dangerous Code in the World: Validating SSL Certificates in Non-Browser Software – where it was mentioned that client authentication vulnerabilities would be looked into later. So what caught my interest was Vitaly’s 2010 paper called The Hitchhiker’s Guide to DNS Cache Poisoning.

As an engineer rather than a hacker, I’m more interested in the countermeasures against attacks than in exploiting vulnerabilities. The essential point about DNS cache poisoning is that DNSSEC is needed to protect against this type of attack. I’m not sure if this is true for DNS hijacking, which I haven’t had time to look into. Wikipedia claims that DNS forgery is still a problem.

In the future, DNSSEC has the potential to revolutionize the X.509 certificate validation mechanisms. If, for one’s own domains, the signatures of valid host certificates were stored in trustworthy DNS, then this would prevent some current vulnerabilities in HTTPS which were discovered by Moxie Marlinspike and demonstrated in his Blackhat 2009 slides.
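
This idea has since been standardized as DANE ( RFC 6698 ), which publishes certificate associations in signed TLSA records. They can be looked up with dig, for example ( example.com is a placeholder domain ):

$ dig +dnssec TLSA _443._tcp.example.com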

Recently Google announced that their public DNS resolver supports DNSSEC. This gives me hope that “we get it for free” if we wait for all the ISPs. But is it so simple? Google stated that:

“Effective deployment of DNSSEC requires action from both DNS resolvers and authoritative name servers. Resolvers, especially those of ISPs and other public resolvers, need to start validating DNS responses. Meanwhile, domain owners have to sign their domains.”

It’s clear to me that domain owners will “sign their domains” if it means that clients connecting to their domains are better protected – there’s a natural motivation there. It’s not clear to me what a DNS client ( say a Java application ) needs to do to make sure its DNS queries are certified valid. This for me is the biggest question, since in the end it’s an application resolving the DNS domain name to an IP address. What good is it if an ISP’s DNS resolver is sure of its domain information, but this assurance never reaches the client? DNSSEC will also not help solve any problems with physical security. If an intruder has physical access to the network, all bets are off. ARP cache poisoning and all the risks associated with it are still possible. This article shows how to use the Ettercap tool to do DNS spoofing.
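
One way to at least observe validation is to query a validating resolver such as Google’s and check for the “ad” ( authenticated data ) flag in the reply header:

$ dig +dnssec example.com @8.8.8.8

If the zone is signed and the resolver validated the answer, the “;; flags:” line of the reply contains ad. Note, though, that the flag itself is unauthenticated on the wire between the application and the resolver – which is exactly the “last mile” problem described above.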

This will be one of my future missions – to find out the state of client support.

Often in IT discussions, regardless of how formal or informal, with business people or experts, there is a fair amount of confusion about the concepts of address and identity.

An address is a topological identifier, a “position” in some space. For any two distributed parties to communicate, a communication channel is required, with a party at each end. The channel will have an address at each endpoint.

  • a chat room name is an address in a chat provider’s collection of chat rooms.
  • a GPS coordinate is an address in the GPS coordinate system.
  • a map coordinate is an address in the Geographic coordinate system.
  • a postal address or PO box is an address in a national postal service.
  • a telephone number is an address in the global space of telephone numbers.
  • an InternetProtocol address is an address in the Internet.
  • a domainname is an address in the DNS namespace, where the function of DNS is to map domainnames to InternetProtocol addresses.
  • an email address is the name of a mailbox to which emails can be sent and from which they can be received.
  • a JMS queue/Topic is an address in the local namespace of a JMS ServiceProvider.

Even though a map coordinate has, on first thought, nothing to do with communications, it can still be used as the point at which communication takes place over time. For example, two people could exchange letters by going to a specific map coordinate each day but at different times. This can be represented as a communication channel where each endpoint has the same spatial address but a different time. In information theory, a communication channel’s endpoints can have both spatial and time dimensions.

An identity is a name associated with an “actor” or “agent” of a system which actively participates in some process. Identities need authentication at the service provider ( so that the service provider can trust that the agent claiming the identity is who he says he is ). Once the identity is proven, the service provider can determine the authorizations of the identity – through some prior trust, a directory lookup, SSO, or some federated mechanism like XACML.

  • Any username used in a login scheme is an Identity.
  • Any X.509 certificate claims an Identity of the certificate’s subject attribute.

In the “login” sense above, an email address is also an Identity: when someone trying to send or receive email logs into the service provider, the email address is provided as the username together with a password.

This overlapping usage can be a source of confusion, e.g. when an address is used as an identity in some use-case.

In one phrase: lack of interoperability.

Now the reasoning behind the phrase….

ProtocolBuffers is a language-independent data format and a set of tools and libraries which Google open-sourced a few years ago. The unique selling points are language independence and high efficiency / performance. ProtocolBuffers RPC is a remote procedure call client- and server-side “interface” specification which utilizes protocol buffer messages and defines “Services”. The RPC api has some features which are arguably very interesting for service providers – like call cancellation, and asynchronous callback on completion.
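
For illustration, a service is declared in a .proto file roughly like this ( SearchService and its messages are hypothetical names, proto2 syntax ):

// hypothetical .proto service definition
message SearchRequest {
  required string query = 1;
}
message SearchResponse {
  repeated string result = 1;
}
service SearchService {
  rpc Search (SearchRequest) returns (SearchResponse);
}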

Now, the ProtocolBuffers serialization is very performant – much more so than, say, SOAP WebServices. Unfortunately Google did not release their own RPC implementation as open source. This would not be a problem if they had defined a standard ensuring interoperability of the RPC services between different implementations – between clients and servers made available by different parties with differing technologies and programming languages. The lack of a “wire format specification” in particular

  • means providers need to provide their own libraries in all supported languages if they want to achieve reasonably wide adoption. The current list of Official RPC Implementations shows that most libraries only support one language. A few RPC implementations support two languages, and the only library which claims to support multiple ( all ) languages, ICE, is not actually a Protobuf RPC implementation, but a pre-existing multi-language proprietary RPC library which was extended to support Protobuf message payloads.
  • means software vendors cannot commit to one library, since the libraries are not compatible with each other. For example, JBoss application server would not use Tomcat if it were not a standard web application container.

Google could have done things differently – take Thrift from Facebook as an example. Thrift provides a reference implementation with each language binding. Having said that, I do appreciate why Google was not prepared to release their RPC implementation(s). You need more than just a wire format to define interoperability. For example

  • a service naming concept – or “directory service” ( DNS ). Cloud service providers find themselves partitioning services – not all service providers are going to load balance behind one hostname. 
  • application layer identity management, authentication and authorization mechanisms
  • transport layer security and protection mechanisms.
  • logging mechanisms
  • accounting mechanisms
  • monitoring and administrative remote control of all the above.

I guess if Google were to release an implementation which did all the above, they would be giving up some of their very precious strategic operational advantages.

Transport Layer Security ( the use of TLS/SSL in HTTPS etc. ) is a clear prerequisite for B2B eCommerce, irrespective of industry. Without TLS, eCommerce partners would not be able to trust who they are doing business with. You seldom ( perhaps never ) get something for nothing, and there is quite a substantial overall cost incurred in achieving TLS. I realized just how much effort managing trusted material takes when I was confronted with an expired certificate for my open-source project demonstration protobuf-rpc-pro. Furthermore, in my job in enterprise application integration I’ve experienced many real problems associated with the (mis)management of trusted material. Here are some thoughts about the cost for medium to large sized organizations.

  • key generation and certification requests. This business function should only be performed by a security specialist, making the function expensive. Even if the cost of the certificate which verifies a public key is zero ( which it isn’t, except for self-signed certs ), this business function is expensive due to its specialization. There is an enormous amount of security knowledge required to use the tools ( openssl, keytool etc. ), produce the right keys ( is 1024 bits ok, or 2048? ) and request the right kind of certification ( see the sketch after this list ). Outsourcing this business function is not advisable – it would equate to putting an organization’s most trusted material into untrusted ( or not sufficiently trusted ) hands.
  • protecting the secret key. Wherever a private key is created or stored, it leaves a digital fingerprint ( even after deletion from a hard drive etc. ). The key generation function should be centralized to limit the risk of unintentional key loss or proliferation, and a policy of using keystore passwords to protect private keys should be imposed. There is a cost to the organization in educating anyone who deals with secret key material, even if it is just to explain the internal security policies – like not emailing keys to the places where they’re needed.
  • lifecycle management of trusted material. If an HTTPS certificate for a domain that you own expires, clients which rightly expect a valid certificate on connecting to your domain’s servers will no longer work. This puts a responsibility on the organization owning the domain to manage its certificates. Clearly, when the number of certificates is large, this becomes a constant operational cost. The cost of replacing the trusted material ( keys & certificates ) on the web servers in a timely manner is only marginal: running a cluster of web servers requires a high degree of automation just to manage “routine” security patching and upgrades ( which should happen more frequently than key changes ). A much higher cost is associated with informing partners about certificate changes which are likely to break their clients’ current trust schemes. Clients’ trust doesn’t necessarily break, but it can be that new certification authorities are used which older clients do not trust, or new key lengths are introduced which older clients cannot use. In this case, if partner management is handled poorly, unlucky clients will only realize that something has changed when nothing is working and someone is losing money or customers. Knowing when clients will be affected by certificate changes is again a specialist function. Turning the perspective around, anywhere your organization is a B2B client, you will need to manage constant certificate changes initiated by partners, and proactively realize which partners are obliged to replace certificates due to pending expiry.
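
To make the first and last bullets concrete, here is a minimal sketch of the specialist’s openssl routine ( the file names, subject, and the 2048-bit choice are illustrative ):

# generate a 2048-bit RSA keypair - server.key must be kept secret
$ openssl genrsa -out server.key 2048
# create a certification request ( CSR ) to send to the CA
$ openssl req -new -key server.key -out server.csr -subj "/C=CH/O=Example Org/CN=www.example.com"
# lifecycle check: when does the issued certificate expire?
$ openssl x509 -noout -enddate -in server.crt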

Unfortunately the use of TLS is not optional – not only for B2B commerce, but also within the borders of any organization, where the privacy and confidentiality of transferred data need guaranteeing, and different parts of the organization ( employees / systems ) are trusted to different extents.