Blog Archives

vSphere Web Client – compulsory inefficiency or welcome feature?

With the vSphere 5.5 release comes the usual plethora of new features[1]. In my opinion, the largest change is the death of the full vSphere client, with management now performed exclusively from the vSphere Web Client.

The web client?

The vSphere Web Client was released with vSphere 5.0, and it is certainly a big change from what sysadmins are used to. Managing your vSphere farm through it requires a new way of thinking. At first, I did not like the Web Client: literally everything has moved, so the things you used to be able to click on with your eyes closed are no longer where you expect them. But after using it for a few months, it has certainly grown on me.

Below are some tips or noted intricacies that will help you on your vSphere Web Client journey.

It’s entirely flash based

This has both pros and cons. Pros? The Web Client is multi-platform. No longer will you have to hunt for that one Windows workstation on your network just to install the vSphere client. No longer will you have to download 200+ MB and install different versions of the client to manage different clusters. Chrome, Firefox and IE are all supported.

The cons? It’s entirely flash based. The debate about the security of Flash is outside the scope of this article. I guess at least it’s not Java. It also means that if you access the Web Client over, say, an RDP jump-host or other remote-desktop solution, it will run quite poorly.


Work In Progress (WIP)

This is my favourite feature of the Web Client.

Say you are configuring a new VM, deciding which VLAN/port-group to place it in. Like most sysadmins, you are extremely busy and cannot quite remember which one you need. Instead of cancelling out of the wizard and losing all of your work, you can click on this nice double-arrow:


That will minimize what you are currently doing, to the WIP section:


Now you can continue on with the sub-task you need to do, and when you are finished, simply click on the task in the WIP window to be returned to where you were before! I initially thought, “Heh. That’s neat, I guess”, but then started to use it frequently and now could not live without it.

The search function is handy…mostly

Generally, I find in-built search functions in most software basically useless. They either return nothing, or the complete opposite of what you were after (I’m looking at you, MediaWiki!).

The search functionality in the Web Client is mostly better than that. It will display VMs, virtual networks (port-groups), hosts, clusters, datacenters and datastores: most of what you will be working with on a daily basis.


Above: An IT admin finds success in using the search functionality.

What about vSwitches though? That’s one of the first things you will configure, and you cannot search for it:


Why they didn’t include that, who knows. But for general use, the search function will help you find what you are looking for, instead of blindly navigating around.

Finding where to manage a vSwitch took me a long time. For reference, you have to select the host, then Manage, then Networking.


To navigate your way around the interface faster, you can also use keyboard shortcuts[4]! I find when working remotely, over a slow connection, these drastically increase your administrative speed:

  • Ctrl-alt-3: Hosts and Clusters
  • Ctrl-alt-4: VMs and Templates
  • Ctrl-alt-5: Datastores/Datastore clusters
  • Ctrl-alt-6: Networking

Those four are the main shortcuts I use on a regular basis, and they greatly assist in zipping around the interface. Oddly enough, I was not able to find any official VMware documentation on these shortcuts.

SSO is a big, complex beast, ready for enterprise integration…but not deployment

Whether you install a stand-alone vCenter appliance, or highly-available vCenter infrastructure, you now have to install the SSO component. How do you plan for your deployment, though? Check out this flowchart from Josh Odgers, VCDX #90[2]:


Phew. I’m glad that wasn’t too comple- Oh, wait. Generally, most deployments will consist of only a single vCenter server and a single SSO component. However, if your business and/or technology requirements are different, thorough planning needs to take place.

For enterprise integration, I’ll commend VMware for taking other technologies into account: VMware SSO is compatible with OpenAM, ADFS and Shibboleth/SAML. However, it might not be ready for a true enterprise-level deployment, as you cannot use a clustered database solution such as MS-SQL clustering or Oracle RAC[3]. You can cluster the SSO components themselves, but that relies either on VMware HA (you will have downtime) or on a third-party load-balancer (usually: spend money) to achieve fail-over.

The vSphere Web Client – Resistance is futile

Basically, you now have no choice but to get used to the Web Client. However, don’t look at this as a bad thing, look at this as an opportunity to re-learn what you know, with some icing on top. Initially I disliked the Web Client, but over time I have grown to like it, and now prefer it to the full-client.

At first, I was much less efficient than with the full client. But with regular use of the (mostly) useful search and the keyboard shortcuts, and by de-coupling myself from old habits, I can certainly work faster, and smarter, in the Web Client.

Before deployment, do your research. If you blindly run the installation wizards clicking “next-next-next”, you’re gonna have a bad time.

At the end of the day, the virtual-motherland is mandating use of the Web Client. If you close your eyes and think of a far-away place, things don’t hurt. Too much.


Follow-up on the biometrics post

Following up on the previous post, it’s worth noting that while biometrics can’t be managed by their owner, passwords can be, at least to some extent.

Here are some problems with passwords, together with some tips and tricks (that can be implemented by either you as an individual, or by an organisation as part of its information security policy) to address those problems.

  • People often choose weak passwords. Just do a Google search for “list of common passwords” and be amazed at how commonly weak passwords are used. What can you do? Use a simple mnemonic to think up really good passwords that you’ll never forget. Say, for example, you are setting up your Facebook account and your name is Wally Wombat. You can think to yourself, “my Wally Wombat Facebook password must be really good”. Take the first letter of each word in that phrase, and you have the password “mWWFpmbrg”. Prefix it with your favourite number, and you get something like “45mWWFpmbrg”. You can spice the password up further with non-alphanumeric characters, but even as it stands, that password is better than any on the lists that you can download from the net.
  • People write passwords down. And they stick them to the devices that the passwords are intended to protect. Yes, the picture on the right is real; the passwords authenticated the laptop owners both to the laptop and to the corporate VPN, and would have allowed remote access to anyone that found the laptop that the password was stuck to. Just don’t do it! If you have trouble remembering passwords, even with the above mnemonic trick, then use a “password safe”. This one is free and open source, but there are lots of others. Just make sure you choose a really good password for the safe since if you use one, all your eggs are in one basket! Oh, and do back it up too.
  • People share passwords or reuse them across many systems. As above; just don’t do it!
  • Passwords can be stolen and reused by an attacker. This is the greatest weakness of all passwords and the longer the password is in use, the longer the attacker has to misuse it. The best solution to this problem is to augment passwords with a second factor of authentication; something you have. The most common way of doing this is to use two-factor authentication (2FA) tokens. The main hassle with 2FA is that most 2FA schemes require centralised servers and setup may be too complex for the average user. With that in mind, we’ll cover off 2FA in another post.
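The mnemonic trick from the first point is mechanical enough to sketch in a few lines of Python (a hypothetical illustration; the function name is mine):

```python
def mnemonic_password(phrase, prefix=""):
    """Build a password from the first letter of each word in a
    memorable phrase, optionally prefixed with a favourite number."""
    return prefix + "".join(word[0] for word in phrase.split())

# The Wally Wombat example from above:
print(mnemonic_password(
    "my Wally Wombat Facebook password must be really good",
    prefix="45"))  # prints 45mWWFpmbrg
```

Of course, the point of a mnemonic is that you do this in your head; the code just shows the derivation is deterministic, so you can always reconstruct the password from the phrase.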

So there you have it. The last point (2FA) might be too hard for most, but if you implement the first three points, you already present an attacker with a much more difficult target, putting you ahead of most people on the Internet. Remember, when outrunning a lion, you don’t have to be faster than the lion, you just have to be faster than someone else who is also outrunning the lion.

Consumer biometrics and unicorns

[Hint: Neither of them are real]

So Apple decided to include a fingerprint reader on its new phone. OK, from a marketing wow-factor perspective, why not? Biometrics have been popping up in consumer devices for ages; heck, my 5-year-old Lenovo laptop even has a fingerprint reader! As long as users realise that fingerprint-based login schemes do not improve security for most commercial and personal scenarios, and are really only useful for attracting a potential geeky mate, then go for it!

But if you think that biometric (including fingerprints) authentication schemes are secure, then read on!

“Authentication” means proving your identity; it’s what most of us call “logging on”. Biometric authentication mechanisms allow you to log on based on some characteristic that is unique to you as a human. Example characteristics that are unique to you include your fingerprint, your voice, or the pattern of your retina.


A small serve of passwords

Of course, the most common authentication mechanism in existence today is the password. Passwords (and PINs) allow you to prove your identity using something that you know. The assumption is that since only you know your password, then only you can prove you are you, which of course you do every time you log onto your desktop or Facebook or whatever. The main problems with passwords are that they are hard to remember (more on that later), are often reused (more on that later too), and can be stolen and/or misused without you ever knowing… well, until you see your new Profile Picture that is!

It is possible to address passwords’ inherent weaknesses however. Using a password to authenticate yourself generally works as long as you follow these rules:

  1. Keep the password secret (don’t tell it to someone else, and don’t write it on a yellow sticky note that you keep with your laptop).
  2. Make the password hard to guess (see the “Follow-up” section, below).
  3. Change the password on some regular basis, or if the password has been stolen or misused.

If you follow those steps, then an attacker is very limited in what he/she can do. If you don’t share or reuse your password, then the attacker has to either guess it (unlikely if you have a strong password) or break into some system you use (possible, but there are probably easier targets). And even if the attacker is successful, you can take steps to prevent a repeat attack from succeeding, and lock out the attacker simply by changing your password.

Compare and contrast

Now let’s compare passwords to biometrics; we’ll focus just on fingerprint biometrics since they are the most commonly used for domestic/consumer biometric devices. Comparing your fingerprint to your password:

  1. It’s impossible to keep your biometric characteristic secret. Everything you touch leaves a copy of your fingerprint behind; that has been the case at least since you were born, and it will be the case at least until you die. Everyone and anyone can know all about your biometric characteristic.
  2. It is impossible to make your biometric characteristic hard to guess. It’s you! You’re there! Guessing is not something that is required or meaningful in this context!
  3. As we’ll see below, it’s easy to misuse one of the millions of copies of your biometric characteristic that you leave lying about throughout your life. And when an attacker does misuse (say) your thumbprint, what are you to do? Change it? Delete it? I guess it’s possible, but the consequences will be uncomfortable to say the least. Ah well, you’ve got nine more fingers to go!

Smarter people than I have shown for years that it’s easy to misuse a copy of your fingerprint. The most recent was a couple of days ago, when the most venerable and respected Chaos Computer Club showed how the new iPhone fingerprint reader could be fooled.

You can read all about it here.

But my favourite is the story of Tsutomu Matsumoto and his colleagues. While working at Yokohama National University back in 2002 (that’s 11 years ago!), Matsumoto showed that he could make copies of fingerprints using melted Gummy Bears. Using this highly advanced technique, Matsumoto could defeat (with varying reliability) all the readers that he tested.

Here are the slides from his presentation.

To defeat the readers that were claimed to be able to recognise live fingers, he just licked his fake finger and best of all, if he were ever caught, he could happily eat the evidence!

Just stop it!

Just like unicorns, the lure of much information security gadgetry is enticing but ultimately a fantasy:

  1. You cannot manage your biometric characteristics like you can manage a password. If your fingerprint is copied or misused, you are in trouble. And let’s not even consider what happens if fingerprints are replaced by retina scans!
  2. The biometric readers could be fooled in 2002, and they can still be fooled in 2013. The problem is that once they have been fooled, you get to think about how to change your biometric characteristic… and then you’re in trouble… pain probably, but trouble certainly.
  3. You have no idea who can access the biometric data stored in your phone or laptop. (Well, you do have some idea, don’t you [1][2][3]). And should that biometric data be misused without your consent or knowledge… see the point above.

Biometrics might be a bit of fun, but biometrics are not useful or secure in a commercial/domestic/everyday environment.

The end.

[1] Nokia hijacking secure comms.
[2] iPhone apps accessing private data.
[3] Guardian report on US tech giants data


Recent headlines [1][2][3] about government-sponsored surveillance of Internet traffic and data have (rightfully) caused some introspection in the information security industry about how we should advise our customers (from a business as well as a personal perspective) in light of these events.

Leaving aside the prickly ethical/moral issues surrounding these events, what can we do to reduce the risk that government (or other non-state but powerful) entities can access our private data, whether it be at rest or in transit?

Self defence

The ‘Surveillance Self Defence’ project established by the EFF provides excellent practical guidance on how to minimize your Internet footprint, making it harder for governments (or anyone else for that matter) to access your private data. While written specifically in the context of the US legal system, the advice is applicable to all individuals who value privacy but still want to interact in today’s on-line world. Common-sense advice such as ‘prefer phone communications’ (which typically are not copied/stored, in contrast to email or SMS) and ‘prefer POP to IMAP/webmail’ (POP’s default mode of operation downloads the mail locally and deletes it from the server [4]) abounds on this site, and it’s well worth spending an hour reading through the advice and seeing if you can apply it to your daily on-line routine.

While the SSD site mentioned above does have some advice relating to the security of data ‘on the wire’, the section concerning encryption of web traffic can be summarised as ‘use HTTPS wherever possible’. While this is good advice, recent events have shown that the system of trust underlying HTTPS (or rather SSL/TLS) is coming under increasing pressure. Modern browsers are pre-configured to trust a huge number of CA certificates. Have a read through the slides [5] from the EFF SSL Observatory to get an idea of the current (poor) state of the system. The main problem is that any CA can issue a certificate for any domain. Thus a single compromised (or state-entity-cooperating) CA has the potential to undermine the security of all HTTPS sites on the Internet by issuing trusted certificates for arbitrary hostnames, permitting parties who have access to those keys/certificates to man-in-the-middle communications to those HTTPS hosts (assuming they have access to the relevant network path) [6].

It’s happened before

This is exactly the sad state of affairs which occurred in the infamous DigiNotar ‘hack’. In summary, the DigiNotar CA was compromised and over 500 certificates were issued for names such as *.google.com and *.torproject.org (see [7] for the full list).

The first indication that the issued certificates may have been used by state entities to intercept communications was when users in Iran started reporting that their Chrome browser was issuing warnings about the certificate when they were trying to access their Gmail accounts (read the section below on Certificate Pinning to get an idea of how Chrome was able to detect the ‘fraudulent’ certificates).

A similar event occurred [8] when the TurkTrust CA mistakenly issued sub-CA certificates instead of regular end-entity certificates to an entity, one of which was subsequently used to issue a fraudulent *.google.com certificate.

So what’s to be done?

If a compromised or incompetent CA can potentially cause such global damage, what about the CAs who voluntarily issue (or are coerced by legal means into issuing) certificates for use by intercepting authorities? What can be done to detect these ‘fraudulent’ certificates, which are completely legitimate as far as your web browser is concerned? Well, there are two practical steps we can start taking right now to identify fraudulent certificates being used in man-in-the-middle attacks: the DANE TLSA protocol (RFC6698) and ‘Certificate Pinning’.

Aside: “DANE” stands for DNS-based Authentication of Named Entities. “TLSA” does not stand for anything; it is just the name of the DNS Resource Record type used for DANE. Anyhow…

The idea behind DANE (RFC6698) [9] is to use secure DNS to bind the certificate association between the end-entity certificate (or its issuer) and the DNS hostname. Specifically, during the TLS handshake, the client makes a DNS request for the TLSA record for the server hostname. The response will (securely) indicate one of three ways that the legitimate certificate can be validated:

  • By giving the certificate (or even just the public key), or hash thereof, of the ‘legitimate’ server certificate which is sent from server to client as part of the normal TLS handshake.
  • By giving the certificate (or hash thereof) of the ‘legitimate’ issuing CA which was used to sign the ‘legitimate’ server certificate.
  • By giving the certificate (or hash thereof) of a CA certificate which _must_ be used as a trust anchor when performing PKIX validation of the server’s certificate.

This TLSA record is hosted/signed by the domain owner. Provided that the DNS response is secured by DNSSEC, the client can be assured that the server certificate presented during the TLS handshake has been correctly identified by the domain owner, rather than relying only on the browser’s built-in trust store.
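As a concrete sketch, for the common ‘3 1 1’ parameter combination (usage 3: domain-issued certificate, selector 1: SubjectPublicKeyInfo, matching type 1: SHA-256), the association data published in DNS is simply the hex-encoded SHA-256 digest of the server key’s DER-encoded SubjectPublicKeyInfo. The Python below is an illustration only: the SPKI bytes are placeholder data, and example.com is a stand-in hostname.

```python
import hashlib

def tlsa_3_1_1(spki_der):
    """Association data for a TLSA '3 1 1' record: the hex-encoded
    SHA-256 digest of the DER-encoded SubjectPublicKeyInfo."""
    return hashlib.sha256(spki_der).hexdigest()

# Placeholder bytes; in practice, extract the SPKI from the real certificate.
spki = b"placeholder SubjectPublicKeyInfo DER bytes"
print("_443._tcp.www.example.com. IN TLSA 3 1 1 " + tlsa_3_1_1(spki))
```

The record name encodes the port and protocol (_443._tcp), so the same host can publish different associations for different services.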

While all this is great in theory, the practical roll-out of TLS clients supporting RFC6698 (published in Aug 2012) has been almost non-existent to date:

  • Firefox has native support pencilled into their security roadmap, but there is no active development on the feature at this stage [10].
  • Chrome has mention of forthcoming support for RFC6698 in its Git repository [11], but it’s not there yet.
  • Internet Explorer doesn’t support it, but Microsoft thinks the idea ‘is a great one!’ [12]
  • OpenSSL (the core library of many SSL-enabled applications) has support in a development branch, which should make its way into the 1.0.2 release [13].
  • Postfix v2.11 supports DANE for SMTP client connections using TLS.

In other words, no stable releases of major web or email clients (browsers or libraries) support DANE TLSA yet. However, all is not lost! For Firefox at least, there are two plugins [14][15] which provide certificate verification of HTTPS connections using DANE, and these can be used in the interim until the major browsers support DANE natively.

Certificate Pinning

An alternative (albeit less scalable) approach to using DNSSEC to securely advertise certificates or their fingerprints is to hard-code them into the client itself (!) and update them as browser updates are pushed out. This is precisely how Chrome was able to detect and prevent the DigiNotar and TurkTrust ‘attacks’: the certificate fingerprints for all of Google’s HTTPS ‘sites’ have been baked into Chrome since Chrome 13 [16]. If the fingerprint of any certificate purporting to be issued to a Google hostname does not match anything on the built-in whitelist, Chrome will display an error.

Great – Chrome will detect MITM attacks using fraudulently-issued certificates against Google sites, but what about the rest of us? Can we get the sites we care about on that whitelist too? Turns out you can [17]. Used in conjunction with HSTS headers on the server side (to prevent downgrade attacks), command-line options can be passed to Chrome to indicate which sites and certificate fingerprints to add to the whitelist, thus ensuring that fraudulent certificates can be detected if they are presented.
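For the server side of that equation, the HSTS header is a one-line addition to each HTTPS response. A typical value looks like the following (the max-age and subdomain scope shown are example choices, not requirements):

```
Strict-Transport-Security: max-age=31536000; includeSubDomains
```

Once a browser has seen this header over a valid HTTPS connection, it will refuse to fall back to plain HTTP for that host for the stated period, which blocks the downgrade attacks mentioned above.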

What about the other major browsers?

  • Firefox has this feature on the roadmap [18], but no confirmed landing date.
  • Internet Explorer (or rather the underlying Windows client TLS libraries) can be configured to use certificate pinning via the Enhanced Mitigation Experience Toolkit (EMET) [19].

The certificate pinning feature is a great way to detect the use of fraudulent certificates when connecting to high-value HTTPS sites. The main downside is that the sites/thumbprints must be managed manually (the thumbprints will need to change every time the server certificate is re-issued, e.g. due to expiry), so this technique is not as scalable as the DANE-based one.

To sum up

While authentication based on X.509 certificates is going strong inside the enterprise (with certificates issued by Enterprise CAs which are significantly more trustworthy than ‘public’ CAs), the situation out ‘on the Internet’ is more precarious due to trust issues which are becoming more apparent as time passes. The use of DANE TLSA (in combination with DNSSEC) will mitigate most of these issues by letting domain owners assert their own valid certificates (or issuing CAs) in addition to or instead of the public Certificate Authorities. Until DANE TLSA is more widely supported in browsers, the use of certificate pinning is a good interim measure to protect against CA misuse/abuse.


[1] A Guardian report

[2] ABC report

[3] Bruce Schneier’s blog

[4] Of course, the price to pay for this is that you have to maintain backups of your own mail rather than relying on your external mail provider, but most people probably have an established backup routine for all their other data in any case.

[5] SSLiverse

[6] The aforementioned slides state that Macao has its own CA certificate in the trust store of Windows XP, but that SSL certificates issued by that CA were nowhere to be seen on the Internet despite extensive scanning. If it’s not being used to sign certificates for legitimate sites, where is it being used and by whom?

[7] Rogue certificates

[8] Enhancing digital certs

[9] RFC6698

[10] Mozilla firefox

[11] Chromium

[12] DANE

[13] OpenSSL ticket

[14] Firefox plugin #1

[15] Firefox plugin #2

[16] Chrome pinning

[17] Whitelisting with Chrome

[18] Cert pinning with Firefox

[19] EMET cert pinning