Archive

Software Engineering

My #1 motto currently is “Don’t do something until you understand what you’re doing”.

Software developers are often tasked with doing something they don’t understand. This is normal in the enterprise world, particularly where “segregation of duties” is fashionable. Requirements are hashed out between business people and requirements engineers, designs between requirements engineers and architects, and implementation concepts between those architects and product owners – so the developer is usually invited very late to the party. The developer is put under pressure to deliver by the entire enterprise software delivery apparatus, and if the developer then starts asking questions, this is invariably frowned upon by the powers that be. This pressure psychology stops developers from asking necessary questions, and a culture of “just do it” takes over. The results are unsatisfactory for everyone – delivery quality suffers, and developers do not learn and grow as much as they could to meet the needs of the larger corporation.

If everyone tried to work by this motto, and let others work by it too, then I’m sure IT projects would be more successful.

Personally, I make every effort to really understand what I’m doing, and when acting as scrum master or in roles where work is prepared for others, I put making things understandable for others at the top of my agenda.


Setting up an HTTPS web server is not a trivial undertaking. The cipher suites which the server can be configured to support allow for a great number of choices. I have come across “best practices” guides on the internet which a practitioner can use to decide which SSL ciphers to allow, and there are also several possibilities to test an HTTPS web server for vulnerabilities and “compliance” with best practices. Even though I have been working with these SSL concepts for many years, I realized that I did not know the following:

Is the choice of SSL cipher suite (chosen during the SSL handshake) influenced by the type of key used for the server certificate?

There are typically several possibilities to choose from for the public/private keypair used when requesting an SSL server certificate from a commercial CA. Generally, either an RSA or a DSA key is used, but other types exist – like ECDSA. Is the TLS cipher suite negotiated in the SSL handshake limited by the type of the server’s keypair?

In a nutshell, the answer is YES, but the reasons are fairly complex. The TLS handshake starts with the client indicating, in the ClientHello message, the list of cipher suites it supports. This takes place even before the client knows what kind of certificate the server has and which ciphers the server is going to offer. The server replies with the ServerHello message, indicating the “chosen” cipher suite; if the client and server do not share at least one cipher suite, the handshake will not complete successfully. The description of the SSL handshake for the AES128-SHA cipher suite in this article on how to set up SSL with Perfect Forward Secrecy states that the pre-master secret is communicated from the client to the server by encrypting it with the public key of the server (so that only the server holding the private key can decrypt it). This works for RSA public/private keypairs, which support encryption. Taking a closer look at the suite using openssl ciphers -v gives more insight into how the cipher suite is “composed”.


$ openssl ciphers -v 'AES256-SHA:AES128-SHA'
AES256-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA1
AES128-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(128) Mac=SHA1

The “Kx=RSA” field indicates that RSA is the key exchange mechanism, and “Au=RSA” indicates that RSA is the authentication mechanism. It is the use of RSA as the key exchange mechanism which limits the cipher suite’s use to servers whose SSL certificate contains an RSA public key. The cipher suites based on RSA key exchange can be listed with openssl ciphers -v kRSA.
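To see the effect from the client side, here is a minimal Java sketch (the host is a placeholder) which connects and prints the suite the handshake actually settled on – against a server with an RSA certificate this will be an RSA, DHE-RSA or ECDHE-RSA suite, never an ECDSA one:

import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

// Minimal sketch: connect to a server and print the negotiated cipher suite.
// The suite's Au= component must match the key type of the server certificate.
public class NegotiatedSuite {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "example.com"; // placeholder
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket(host, 443)) {
            socket.startHandshake();
            System.out.println("Negotiated: " + socket.getSession().getCipherSuite());
        }
    }
}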

So what is the effect of server key choice on PFS cipher suites like ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:EDH-DSS-DES-CBC3-SHA? From

$ openssl ciphers -v 'kEDH:!ADH'
DHE-RSA-AES256-SHA256 TLSv1.2 Kx=DH Au=RSA Enc=AES(256) Mac=SHA256
DHE-DSS-AES256-SHA256 TLSv1.2 Kx=DH Au=DSS Enc=AES(256) Mac=SHA256
...
$ openssl ciphers -v | grep EDH
EDH-RSA-DES-CBC3-SHA SSLv3 Kx=DH Au=RSA Enc=3DES(168) Mac=SHA1
EDH-DSS-DES-CBC3-SHA SSLv3 Kx=DH Au=DSS Enc=3DES(168) Mac=SHA1
...

it would seem that the EDH and DHE suites come in variants for both RSA and DSS server keys, and from

$ openssl ciphers -v ECDH | grep ECDHE
ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(256) Mac=AEAD
ECDHE-ECDSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESGCM(256) Mac=AEAD
...

the ECDHE cipher suites come in variants for RSA and ECDSA server keys.

Judging from the overall cipher suite support, RSA server keys are the most versatile: they work with RSA, DH, and ECDH key exchange mechanisms, whereas DSS keys currently lack EC support in OpenSSL 1.0.

Now although the server key type does influence the cipher suites which can be used during the TLS handshake, client authentication is “orthogonal” to the establishment of a secure session. Client authentication just relies on the client signing some data exchanged in the handshake with the client’s private key, and on the server checking that signature against the client’s public key. This means that HTTPS client authentication can use any type of client key, irrespective of the server’s key type (as long as the server can check the client’s signatures).
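A minimal Java sketch of that signature step (greatly simplified – real TLS signs a defined portion of the handshake transcript) shows why the client’s key type is independent; here the client happens to use an EC key, but an RSA or DSA key would work the same way:

import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

// Sketch of the signature underlying TLS client authentication: the client
// signs handshake data with its private key, the server verifies with the
// client's public key. Nothing here depends on the server's key type.
public class ClientAuthSketch {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
        kpg.initialize(256); // the client key could just as well be RSA or DSA
        KeyPair clientKey = kpg.generateKeyPair();

        byte[] handshakeData = "handshake-transcript".getBytes();

        Signature signer = Signature.getInstance("SHA256withECDSA");
        signer.initSign(clientKey.getPrivate());
        signer.update(handshakeData);
        byte[] sig = signer.sign();

        Signature verifier = Signature.getInstance("SHA256withECDSA");
        verifier.initVerify(clientKey.getPublic());
        verifier.update(handshakeData);
        System.out.println("signature valid: " + verifier.verify(sig));
    }
}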

Don’t forget you can test a running server’s TLS handshake anytime with the openssl command

openssl s_client -tls1 -cipher ECDH -connect host:port

I’ve just managed to successfully implement a Java proof of concept for the SMT encryption schemes ecies384-aes256 and ecies384+rsa-aes256.

The concrete schemes are defined as:

encryption( M, PF, (K-A,K-a), K-B, A-B ) -> E, L
{
  validate that A-B is a 384bit X.509 encoded EC sessionKey.
  EC key generate (A-A,A-a), an EC keypair on secp384r1
  ECDH key agreement (A-a,A-B) => shared secret S
  PFS := SHA256(PF) – convert the PF into a shared secret.
  SKe || IVe := SHA384(S||PFS),
    where SKe is a 256bit AES encryption key,
    and IVe is a 128bit initialization vector for the AES encryption
  E := AES256/CTR(SKe,IVe,ZLib(M||Sign(K-a,M||byte-len(M))))
  L := byte-len(M) || A-A
    where byte-len(M) is the length of M in bytes, represented as an 8-byte fixed-length big-endian integer,
    and A-A is an X.509 encoded EC public key – aka the sender’s messageKey
}
The encrypted data E is the compressed message and signature, symmetrically encrypted with the derived secret key. The encryption context L is the concatenation of the plaintext message length with the unique EC public key of the originator. The length of the plaintext message is known outside the encrypted data so that suitable buffer space can be made available at decryption time and the decompression can be bounded.
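A Java sketch of the derivation core, using only standard JCA providers (the names mirror the scheme above; recipientSessionKey stands in for A-B, and the signing and compression steps are elided):

import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.MessageDigest;
import java.security.PublicKey;
import java.security.spec.ECGenParameterSpec;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyAgreement;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

// Sketch of the ecies384-aes256 key derivation and symmetric encryption.
public class Ecies384Sketch {
    static byte[] encrypt(PublicKey recipientSessionKey, byte[] pf, byte[] payload)
            throws Exception {
        // EC key generate (A-A,A-a), an EC keypair on secp384r1
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
        kpg.initialize(new ECGenParameterSpec("secp384r1"));
        KeyPair messageKey = kpg.generateKeyPair();

        // ECDH key agreement (A-a,A-B) => shared secret S
        KeyAgreement ka = KeyAgreement.getInstance("ECDH");
        ka.init(messageKey.getPrivate());
        ka.doPhase(recipientSessionKey, true);
        byte[] s = ka.generateSecret();

        // PFS := SHA256(PF)
        byte[] pfs = MessageDigest.getInstance("SHA-256").digest(pf);

        // SKe || IVe := SHA384(S||PFS): 48 bytes = 32-byte AES key + 16-byte IV
        MessageDigest sha384 = MessageDigest.getInstance("SHA-384");
        sha384.update(s);
        byte[] keyAndIv = sha384.digest(pfs);
        SecretKeySpec ske = new SecretKeySpec(keyAndIv, 0, 32, "AES");
        IvParameterSpec ive = new IvParameterSpec(Arrays.copyOfRange(keyAndIv, 32, 48));

        // E := AES256/CTR(SKe,IVe,...) over the compressed, signed payload
        Cipher aes = Cipher.getInstance("AES/CTR/NoPadding");
        aes.init(Cipher.ENCRYPT_MODE, ske, ive);
        return aes.doFinal(payload);
    }
}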

decryption( PF, (K-B,K-b), (A-B,A-b), K-A, E, L ) -> M
{
  E := AES256/CTR(SKe,IVe,ZLib(M||Sign(K-a,M||byte-len(M))))
  L := byte-len(M) || A-A
    where byte-len(M) is the length of M in bytes, represented as an 8-byte fixed-length big-endian integer,
    and A-A is an X.509 encoded EC public key

  PFS := SHA256(PF) – convert the PF into a shared secret.
  ECDH key agreement (A-b,A-A) => shared secret S
  SKe || IVe := SHA384(S||PFS)

  M || Sign(K-a,M||byte-len(M)) := UnZLib(byte-len(M), AES256/CTR(SKe,IVe,E))
    where decompression fails on an invalid stream, if the decompressed length > byte-len(M), or if the stream ends before byte-len(M) bytes are decompressed.
  verify(K-A, M||byte-len(M), Sign(K-a,M||byte-len(M))) and fail if the signature is incorrect.
}
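The bounded decompression is the fiddly part in Java; here is a sketch with java.util.zip.Inflater (the bound is the caller’s expected byte count – in the scheme it derives from byte-len(M), plus room for the appended signature):

import java.util.Arrays;
import java.util.zip.DataFormatException;
import java.util.zip.Inflater;

// Sketch of bounded ZLib decompression: inflate at most maxBytes, failing on
// a corrupt stream, on overrun, or on premature end of input.
public class BoundedInflate {
    static byte[] inflate(byte[] compressed, int maxBytes) throws DataFormatException {
        Inflater inflater = new Inflater();
        inflater.setInput(compressed);
        byte[] out = new byte[maxBytes]; // buffer bounded by the known length
        int produced = 0;
        while (!inflater.finished()) {
            int n = inflater.inflate(out, produced, maxBytes - produced);
            produced += n;
            if (n == 0) {
                if (inflater.finished()) break;
                if (inflater.needsInput() || inflater.needsDictionary())
                    throw new DataFormatException("stream ends before the expected length");
                if (produced == maxBytes)
                    throw new DataFormatException("decompressed length exceeds the bound");
            }
        }
        inflater.end();
        return Arrays.copyOf(out, produced);
    }
}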

The ecies384+rsa-aes256 scheme is a minor variant of ecies384-aes256 where

L := RSAEncrypt( K-B, byte-len(M) || A-A )

The encryption context is encrypted with the destination user’s 2048bit RSA public key. This scheme “hides” the sender’s messageKey by encrypting it with K-B; only the destination can decrypt the messageKey and message length, using the destination’s private key K-b.
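In Java this step is a single RSA operation; a sketch (the OAEP padding is my assumption – the scheme text does not name a padding):

import java.security.PublicKey;
import javax.crypto.Cipher;

// Sketch of the ecies384+rsa-aes256 variant: the encryption context
// (byte-len(M) || A-A) is encrypted with the recipient's 2048-bit RSA key K-B.
public class RsaContextSketch {
    static byte[] encryptContext(PublicKey kB, byte[] lenAndMessageKey) throws Exception {
        // padding choice is an assumption; the 8-byte length plus an X.509
        // encoded secp384r1 key (~120 bytes) fits OAEP's ~190-byte limit
        // for a 2048-bit modulus
        Cipher rsa = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        rsa.init(Cipher.ENCRYPT_MODE, kB);
        return rsa.doFinal(lenAndMessageKey);
    }
}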

A lot of my time was spent implementing temporary-file-backed input and output streams, so that indefinitely long data can be written and read with encryption/decryption happening on the fly. I will eventually release the code under some open license – but first I want to get some feedback from the crypto community about the value of the scheme itself. I also want to implement a cascade of AES with some other block/stream cipher, as discussed by Zooko on the Least-Authority File System blog.
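For illustration, the core of such a stream can be sketched with a CipherOutputStream spooling to a temp file (the initialized AES/CTR cipher would come from the scheme above; transferTo requires Java 9+):

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import javax.crypto.Cipher;
import javax.crypto.CipherOutputStream;

// Sketch of a temporary-file-backed output stream with on-the-fly encryption,
// so the plaintext never has to fit in memory.
public class TempFileCryptoSketch {
    static File encryptToTempFile(InputStream plaintext, Cipher aesCtr) throws IOException {
        File tmp = File.createTempFile("smt-", ".enc");
        tmp.deleteOnExit();
        try (OutputStream out = new CipherOutputStream(new FileOutputStream(tmp), aesCtr)) {
            plaintext.transferTo(out); // loop with a buffer on older JDKs
        }
        return tmp;
    }
}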

Several large, influential IT technology companies (Oracle and Red Hat are the ones I know of) are trying to shape the future of enterprise application development by trending towards Service Component Architectures. The SCA concept is that of defining services which can consume, and be consumed by, other services. All the current standardization efforts and shiny new products coming out cannot quite convince me to drop how I define and design applications today and start building with SCA.

A JEE application today is code structured as a set of interconnected services, which use datasources and adapters (for other services) and can expose its own domain model as a service to others calling it. The “application” defines the boundaries of the service and becomes a deployable unit for some runtime container. This is “service orientation”. The internal services of an application can be re-used between applications if they are packaged separately in re-usable libraries. The application defines a “composite” service which is more likely at a “business” level of granularity, whereas the internal services are lower level.

Now the problem with SCA: if you start defining everything as a service, then the boundaries between applications disappear. Not only that, but it becomes difficult to separate the “low level” (internal) services from the higher level business services. After X years of developing services, I can imagine there will be an incredible spaghetti of service dependencies. Maintenance will be every bit as difficult as with the classic JEE application – but the classic application will have more flexibility in migrating its infrastructure, like moving it into the cloud.

Don’t get me wrong – I do think there is a business case for SCA products. When a company relies to a large degree on off-the-shelf cloud services, it has no option but to integrate “around” these applications to get them to work together. For that, you need process capabilities and the classic “service bus” data movers and transformers, etc. So SCA gets to be the integration playground where the things get done that no one else wants to do or can do. SCA gives you this entire capability as a single application (which needs deployment / hosting / support) – which is probably the end-goal vision of the tech giants promoting it.

SCA could also be effective as a virtualization layer between a Web/Portal domain and your business application domain.

When a process is modeled in IT systems, it is normally designed with a large set of assumptions about the real world. The assumptions are just that – assumptions – and sometimes wrong or incomplete. This leads to processes which do not “support” the full complexity of the real world. As a consequence, operations personnel can spend significant effort “troubleshooting” the IT systems which are impacted by the process.

For instance, an order fulfillment process may not allow the order to be cancelled after a certain point in the process, whereas in reality a customer can still, exceptionally, cancel his order by contacting the support staff and escalating. The IT system’s process will run to completion – requiring cleanup of the cancellation afterwards.

I call this the “Process Reality Disconnection”. It can make an organization incur significant running costs. Because real-world processes are always more complicated than initially assumed, it is worth specifying and implementing compensating / exceptional processes in IT systems, with well-defined pre- and postconditions for each process outcome.