Archive


My #1 motto currently is “Don’t do something until you understand what you’re doing”.

Software developers are often tasked with doing something they don’t understand. This is normal in the enterprise world, particularly where “segregation of duties” is fashionable. Requirements are hashed out between business people and requirements engineers, designs between requirements engineers and architects, and implementation concepts between those architects and product owners – the developer is usually invited to the party very late. The developer is put under pressure to deliver by the entire enterprise software delivery apparatus, and if the developer then starts asking questions, this is invariably frowned upon by the powers that be. This pressure psychology stops developers from asking necessary questions, and a culture of “just do it” takes over. The results are unsatisfactory for everyone – delivery quality suffers, and developers do not learn and grow as much as they could to satisfy the needs of the larger corporation.

If everyone tried to work by this motto, and let others work by it, then I’m sure IT projects would be more successful.

Personally, I make every effort to really understand what I’m doing, and when acting as Scrum Master or in roles where work is prepared for others, I put making things understandable for others at the top of my agenda.


When we (as in humans) rip open a browser window, enter a “URL” into the address bar, and a page from that address is rendered (without any security warnings), what are we actually trusting?

We are trusting that the browser will assure us that the page actually came from the domain’s web servers (and not from anyone else’s web server). The web server’s SSL certificate is a statement, signed by a “trusted” root or intermediate CA, that the web server is authorized to be called by the domain name. This is known as PKIX trust. CAs have to go to great lengths to prove that the person or legal entity requesting an SSL certificate for a domain really “owns” the domain and may request a certificate in the first place. CAs can spend more or less effort proving that a domain is really owned by the requestor. Mechanisms like Extended Validation go in the direction of identifying the organization more thoroughly and checking that it really “owns” the domain. Knowing who owns a domain is normally not necessary; Extended Validation just provides more assurance that CAs don’t make mistakes. Making mistakes shouldn’t be a huge problem for “leaf” certificates – but the effort going into Extended Validation can only mean that 1) it is lucrative for the CAs and 2) we don’t trust that revocation works (a legitimate concern).

When I surf on facebook.com, the name of the domain itself is what I’m conversing with, so knowing more about the company and where it is located geographically is mostly irrelevant. When the page from facebook.com reaches the browser, together with the SSL server certificate, the browser checks that the domain name I entered (and which DNS resolved to the server) is matched by the server’s SSL certificate, and that a reputable CA issued that certificate. The PKIX mechanism protects us from DNS poisoning leading us to a bogus server presenting a seemingly legitimate certificate, because legitimate certificates should only be issued to the legitimate owners of the domain.
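As a rough illustration of what the browser does under the hood, here is a minimal Java sketch (plain JDK only, facebook.com used purely as an example host). It opens a TLS connection, lets the default trust manager validate the certificate chain against the platform’s trusted root CAs, and asks the TLS stack to match the presented certificate against the hostname:

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;
import java.security.cert.X509Certificate;

public class PkixCheck {
    public static void main(String[] args) throws Exception {
        String host = "www.facebook.com"; // the name the user typed into the address bar
        SSLSocketFactory factory =
                (SSLSocketFactory) SSLContext.getDefault().getSocketFactory();

        try (SSLSocket socket = (SSLSocket) factory.createSocket(host, 443)) {
            // Ask the TLS stack to match the certificate against the hostname,
            // in addition to the PKIX chain validation it performs anyway.
            SSLParameters params = socket.getSSLParameters();
            params.setEndpointIdentificationAlgorithm("HTTPS");
            socket.setSSLParameters(params);

            // The handshake throws if the chain does not end in a trusted root CA
            // or if the certificate does not match the hostname.
            socket.startHandshake();

            X509Certificate serverCert =
                    (X509Certificate) socket.getSession().getPeerCertificates()[0];
            System.out.println("Subject: " + serverCert.getSubjectX500Principal());
            System.out.println("Issuer : " + serverCert.getIssuerX500Principal());
        }
    }
}
```

If either check fails, startHandshake() throws, which is roughly the moment a browser would show its security warning.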

To cut a long story short – if DNS could be trusted absolutely, then PKIX trust would be almost irrelevant for SSL web servers. In fact, would we even need SSL server certificates at all, if you could bind a public key directly into a DNS record for the domain? I don’t know whether DNSSEC goes this far. Obviously there are use cases where I want to know about the company behind the domain name – and that information is carried by the domain’s PKIX certificates, but other services like WHOIS could be revamped to fill this gap.
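As far as I know, DANE (RFC 6698) defines TLSA records for roughly this purpose. Assuming the hash of the server’s public key had already been obtained from a DNSSEC-validated record (the DNS lookup itself is left out here), the client-side check would collapse into a simple comparison, sketched below with plain JDK classes:

```java
import java.security.MessageDigest;
import java.security.cert.X509Certificate;
import java.util.Arrays;

public class DaneStyleCheck {

    /**
     * Compares the SHA-256 hash of the server certificate's public key
     * (its SubjectPublicKeyInfo) against the hash supposedly published in a
     * DNSSEC-signed DNS record for the domain - roughly what a TLSA record
     * with selector "SPKI" and matching type "SHA-256" would carry.
     */
    static boolean keyMatchesDnsRecord(X509Certificate serverCert,
                                       byte[] hashFromDns) throws Exception {
        byte[] spki = serverCert.getPublicKey().getEncoded(); // DER SubjectPublicKeyInfo
        byte[] actual = MessageDigest.getInstance("SHA-256").digest(spki);
        return Arrays.equals(actual, hashFromDns);
    }
}
```

The hard part is of course not this comparison but trusting the DNS answer itself, which is exactly what DNSSEC would have to provide.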

Maybe eventually PKIX will mutate into a system where nations and government authorities really use it to bind public keys to natural persons, organizations and legal entities in a more meaningful way than is done today. This would require making the X.500 directories themselves more transparent and binding meaningful national and international identifiers into them. Still – I believe that in the long term the SSL certificate monopoly WebTrust, all the root trust programs of the operating system manufacturers, and most current public CAs will disappear and be replaced by state-run CAs.

Last night I came across a discussion thread on cryptography@randombit.net about how email is unsecurable. This piqued my attention because, in designing TDMX, I came to the same conclusion. Some people propose a replacement for email; others want to “stitch up” the holes with incremental improvements. I tend to agree with the side which wants to make incremental fixes to standard email – taking the standard route through the IETF.

Designing TDMX from the ground up as a secure messaging system does not contradict my feeling that email should be patched further. My point is that email is being used in many corporations where it is clearly better not to use email at all – when applications want to communicate with applications. For consumer-to-consumer or business-to-consumer communication, email security needs to be improved, but true end2end security is not even wanted (see my previous blog entry).

My next post will be about an idea mentioned here for creating an email-address-to-PGP-key resolution service which could increase the usability of secure email. Who knows, maybe someone from the IETF can take this up as a new draft!
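To make the idea a bit more concrete, such a resolution service could be as simple as an HTTPS lookup keyed by the email address. The sketch below is purely hypothetical – the host keys.example.org, the URL layout and the response format (an ASCII-armored public key) are invented for illustration and are not part of any existing standard:

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class PgpKeyResolver {

    /**
     * Fetches the ASCII-armored PGP public key registered for an email address
     * from a hypothetical resolution service. Host name, path and response
     * format are made up for illustration only.
     */
    static String resolveKey(String emailAddress) throws IOException {
        String url = "https://keys.example.org/lookup?email="
                + URLEncoder.encode(emailAddress, StandardCharsets.UTF_8.name());
        try (InputStream in = new URL(url).openStream()) {
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        }
    }
}
```

The interesting questions are not the lookup itself but who operates such a service, how keys get registered, and how the binding between address and key is authenticated.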

In short – because corporations cannot TRUST their employees (enough).

If an email is truly confidential, then only the recipient is able to decrypt the message and read the contents. If the message could be decrypted by anyone other than the recipient, then the message is not truly confidential. Corporations have legitimate security reasons to want to “inspect” the contents of messages, both those being sent (authorization) and those being received (protection). Considering just the sending of messages, corporations will naturally want a mechanism to inspect message contents either before they effectively leave the company premises or, in less security-conscious environments, to analyze the sent contents after the fact. Without this possibility, the corporation would have to trust its employees absolutely not to abuse the communication possibilities – for example, an employee sending company secrets even to a legitimate business partner, let alone emailing them to WikiLeaks.

In the consumer world, where individuals are working on their own behalf, it is possible that end2end-encrypted email systems will become more popular as usability improves and awareness of the insecurity of traditional email increases. I don’t believe email will suffer extinction, because there is no standard alternative for business-to-consumer communication. This process is very slow – secure email offerings like ZSentry have been around for a long time, even if Lavabit / Darkmail is successfully riding a hype wave thanks to Snowden.

In the corporate world, where employees are working on behalf of some corporation, internal email messaging need not be truly confidential – since the communication takes place within the corporation’s trust boundaries. For messaging from inside to outside corporate boundaries, corporations are unlikely to trust their employees enough to embrace truly confidential email. There may be hybrid “gateway”-like products which try to satisfy corporations’ security requirements – but this will not be true end2end encryption: one end of the encryption will still be under the control of the corporation and not in the hands of the employee.
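To see why such a gateway cannot be end2end, consider how it would most likely work: the content key of an outbound message gets wrapped once for the external recipient and once for a corporate inspection key. The Java sketch below is purely illustrative (key management, the mail format and real standards like S/MIME or OpenPGP are left out); the point is only that whoever holds the corporate private key can recover the content key too:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.security.PublicKey;

public class OutboundGateway {

    /** The message content key, wrapped once for each party able to decrypt. */
    record WrappedKeys(byte[] forRecipient, byte[] forCorporateInspection) {}

    /**
     * Generates a fresh AES content key for an outbound message and wraps it
     * for the external recipient AND for the corporate inspection key.
     * Because the corporation can unwrap the key as well, the scheme is not
     * end2end confidential between the employee and the recipient.
     */
    static WrappedKeys wrapContentKey(PublicKey recipientKey,
                                      PublicKey corporateInspectionKey) throws Exception {
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);
        SecretKey contentKey = keyGen.generateKey();

        Cipher wrap = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");

        wrap.init(Cipher.WRAP_MODE, recipientKey);
        byte[] forRecipient = wrap.wrap(contentKey);

        wrap.init(Cipher.WRAP_MODE, corporateInspectionKey);
        byte[] forCorporate = wrap.wrap(contentKey);

        return new WrappedKeys(forRecipient, forCorporate);
    }
}
```

This is essentially key escrow at the gateway, which may well be what corporations want – but it should not be sold as end2end encryption.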

Several large, influential IT technology companies (Oracle and Red Hat are the ones I know of) are trying to shape the future of enterprise application development by pushing towards Service Component Architecture (SCA). The SCA concept is that of defining services which can consume and be consumed by other services. All the current standardization efforts and shiny new products coming out cannot quite convince me yet to drop how I define and design applications today and start building with SCA.

A JEE application today is code structured as a set of interconnected services, which use datasources and adapters (for other services) and can expose its own domain model as a service to others calling it. The “application” defines the boundaries of the service and becomes a deployable unit for some runtime container. This is “service orientation”. The internal services of an application can be re-used between applications if they are packaged separately in re-usable libraries. The application defines a “composite” service which is more likely at a “business” level of granularity – whereas the internal services are lower level.
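As a rough Java sketch of that layering (all class names are invented for illustration): two low-level internal services that could be packaged into a re-usable library, and a business-level “composite” service at the application boundary which other applications call:

```java
// Each top-level class below would live in its own source file within the application.
import javax.ejb.Stateless;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

// Part of the application's own domain model.
class Customer {
    final String id;
    final String name;
    Customer(String id, String name) { this.id = id; this.name = name; }
}

// Low-level internal service: a candidate for packaging into a re-usable library.
@Stateless
public class CustomerRepository {
    public Customer findById(String id) {
        // ... look the customer up via a datasource owned by this application
        return new Customer(id, "Alice");
    }
}

// Another internal service, wrapping an adapter to some other system.
@Stateless
public class CreditCheckAdapter {
    public boolean isCreditworthy(Customer customer) {
        // ... call out to an external credit-check service through its adapter
        return true;
    }
}

// The "composite" service at the application boundary: business-level granularity,
// exposed (here via JAX-RS) to other applications calling this deployable unit.
@Path("customers")
@Stateless
public class CustomerOnboardingService {

    @Inject
    CustomerRepository customers;

    @Inject
    CreditCheckAdapter creditCheck;

    @GET
    @Path("{id}/onboardable")
    public boolean isOnboardable(@PathParam("id") String id) {
        return creditCheck.isCreditworthy(customers.findById(id));
    }
}
```

The deployable unit draws the boundary around all of these classes; only the boundary service is meant to be consumed from outside the application.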

Now for the problem with SCA. If you start defining everything as a service, then the boundaries between applications disappear. Not only that, but it is difficult to separate out the “low level” (internal) services from the higher-level business services. After X years of developing services, I can imagine that there will be an incredible spaghetti of service dependencies. Maintenance will be every bit as difficult as with the classic JEE application – but the classic application will have more flexibility in migrating its infrastructure, like moving it into the cloud.

Don’t get me wrong – I do think there is a business case for SCA products. When a company relies to a large degree on off-the-shelf cloud services, it has no option but to integrate “around” these applications to get them to work together. For that, you need process capabilities and the classic “service bus” data movers, transformers, etc. So SCA gets to be the integration playground where the things get done that no one else wants to do or can do. SCA gives you this entire capability as a single application (which needs deployment / hosting / support) – which is probably the end-goal vision of the tech giants promoting it.

SCA could also be effective as a virtualization layer between a Web/Portal domain and your business application domain.

Recently I came into contact with a vendor of what can best be described as a “public dataset”, where the company does some enrichment of this data – and this is their value proposition. The vendor’s salesperson / account manager was using the term “Universe” to describe what my employers, as clients, had purchased. Not really being in either the data warehousing or the business intelligence parts of the company, I just assumed that this term was somehow standard. At least no one involved in the discussion batted an eye.

Today, however, for some unknown reason, I just thought: gee, it’s rather presumptuous to call a dataset a “Universe”. It makes the term “Cloud” seem rather puny.

Interestingly enough, the Wikipedia article on data warehousing doesn’t mention the term “Universe” once. It seems “Universe” is what SAP’s BusinessObjects calls a metadata construct which helps a reporting client map SQL queries onto a set of relational tables in a data warehouse or data mart. So, from what I can tell, it’s just SAP’s delusions of grandeur which have brought us this great IT term.