Sunday, 1 March 2015

PCI DSS: outsourced eCommerce

A presence on the internet is considered essential for business; the UK government has a digital inclusion policy to get SMEs online and participating in the digital economy. However, for many small companies, going online and taking payments for services is new and uncharted territory.

Many companies don't appreciate the governance around trading within the digital economy, with issues such as the Payment Card Industry Data Security Standard (PCI DSS) and distance trading regulations forming part of a wide range of regulations, standards and requirements that a company must get to grips with.


The PCI DSS was initiated by the payment brands (Visa, Mastercard, American Express, Discover and JCB) to combine their individual security requirements into a single set of requirements. The standard is developed by the PCI Security Standards Council (SSC). It contains mandatory requirements for the storing, processing or transmission of cardholder data, and covers anything that might affect the storing, processing or transmission of cardholder data. Merchants who accept payment cards from the five brands are responsible for ensuring the payment collection process is compliant with the standard. Merchants cannot delegate this accountability: even if all payment processing is done by third parties, the merchant is still required to ensure those third parties are compliant.

One of the pitfalls I came across when advising companies about the PCI DSS occurs when they already have an eCommerce presence online before attempting to gain PCI DSS certification, and their existing eCommerce operation is not compliant with the requirements of the standard.

For example, they have a website designed, hosted and managed by third parties to take card payments online rather than doing it themselves; this is a good option for many companies, as they may not have the expertise. However, they find that instead of being an easy process, it has become very difficult due to the use of suppliers that are not compliant with the PCI DSS requirements.

Outsourced eCommerce Compliance

For this type of outsourced eCommerce situation, for companies not meeting level 1 merchant status, there is a cut-down version of the questionnaire known as Self-Assessment Questionnaire (SAQ) A, "Card-not-present Merchants, All Cardholder Data Functions Fully Outsourced". It was developed by the SSC to address requirements applicable to merchants whose cardholder data functions are completely outsourced to validated third parties, where the merchant retains only paper reports or receipts with cardholder data.

The eligibility criteria for completing an SAQ A are given within the document; however, the critical point is that cardholder data functions are completely outsourced to validated third parties. "Validated" means the service providers must be PCI DSS compliant for the services they deliver, and this includes the following services.

  • Website Design
  • Physical Hosting
  • Managed Hosting
  • Payment processing

There is a distinction between being certified as a merchant and being certified for the services offered. A service provider will have an Attestation of Compliance (AoC) for either a Report on Compliance (RoC) or an SAQ D for service providers, where the AoC will state the services being covered.

Some companies get caught out because their service provider is certified as a merchant for taking payments, but does not have the service being offered covered by that certification.

For example, you could pay for the creation and hosting of a website from a website design company that takes payment by credit card. They may have outsourced their own eCommerce operation and completed an SAQ A themselves. When asked for evidence of compliance, they may offer the SAQ A as proof of certification, but this only covers their merchant activity and not their software development and hosting services. They should have an SAQ D for service providers to prove their services are compliant, and present the AoC for this when requested.

The Problem

In my experience, companies get caught out by having a website designed and hosted and then finding they have to be compliant with the PCI DSS when their acquiring bank asks for a SAQ to be completed. At this point they discover that their suppliers are not PCI DSS compliant for the services contracted, and also that they don't have sufficient information to complete an SAQ D (the self-assessed version of the full set of requirements), as they don't have control over the hosting or management of the website.

This leaves them in the situation where they have been asked by their acquiring bank to demonstrate compliance and they are unable to meet the request.

The options are:

  • Ask the suppliers to become compliant
  • Audit the suppliers as part of the company's compliance
  • Change to a certified supplier

None of these options are attractive or easy to complete. Whilst a company is non-compliant it could be fined monthly by the acquiring bank, pay additional transaction costs or, in extreme cases, have the ability to process payment cards removed.


My advice for companies thinking about starting an eCommerce operation is to contact an expert in the PCI DSS and get advice on the standard before actually implementing the website. This can save a lot of hassle, time and money in the long term.

There should also be more effort by governments, acquiring banks, payment brands and payment processors to make sure those new to online payments can get the right advice.

Saturday, 28 February 2015

Security Threats

Some of the security threats faced by businesses are shown in the diagram below, which I drew a number of years ago. Although it is still relevant, I'm aiming to update it.

How many do you face on a daily basis?

Thursday, 26 February 2015

What is SSL

SSL has been in the news over the last year for a number of high-profile vulnerabilities, but outside the world of encryption specialists, understanding of what it does is limited. This is fine, as security tools such as SSL and TLS are supposed to be transparent to the end user.

What is SSL

The Secure Sockets Layer (SSL) was a proprietary security protocol developed by Netscape; only versions 2 and 3 were widely used. Version 2.0 was released in February 1995 and version 3.0 in 1996. The 1996 draft of SSL 3.0 was published by the IETF as a historical document in RFC 6101.

The protocol is based on a socket, which is the address of a service and consists of the internet protocol address, the port number and the protocol being used.

Hence a TCP socket and a UDP socket sharing the same IP address and port number are two distinct sockets.

As the name suggests, SSL is a form of security based around the socket on a server. Typically we expect to see SSL on secure websites, indicated by HTTPS in the URL (address) bar and the padlock.

Secure HTTP typically uses socket TCP 443, whilst plaintext HTTP uses socket TCP 80.
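As a sketch of the distinction, the same IP address and port number can be bound simultaneously under TCP and UDP, because the protocol is part of the socket's identity (Python standard library, illustrative only):

```python
import socket

# A TCP socket and a UDP socket on the same IP address and port
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

tcp.bind(("127.0.0.1", 0))        # let the OS pick a free TCP port
port = tcp.getsockname()[1]
udp.bind(("127.0.0.1", port))     # succeeds: different protocol, different socket
```

If TCP and UDP shared one port namespace, the second `bind` would fail with "address already in use"; it doesn't, because the two are distinct sockets.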

A key component from a user's perspective is the digital certificate associated with the SSL connection.

Creating an SSL certificate for a web server requires a number of steps to be completed; a simplified outline is:

  1. Generation of identity information including the private/public keys
  2. Creation of a Certificate Signing Request (CSR) which includes only the public key
  3. The CSR is sent to a CA who validates the identity
  4. The CA generates an SSL certificate that is then installed on the server
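Steps 1 and 2 can be sketched from Python by driving the OpenSSL command-line tool (assuming `openssl` is installed; the filenames and the CN of www.example.com are hypothetical placeholders):

```python
import os
import subprocess
import tempfile

workdir = tempfile.mkdtemp()
key_file = os.path.join(workdir, "server.key")
csr_file = os.path.join(workdir, "server.csr")

# Step 1: generate a 2048-bit RSA private/public key pair
subprocess.run(
    ["openssl", "genpkey", "-algorithm", "RSA",
     "-pkeyopt", "rsa_keygen_bits:2048", "-out", key_file],
    check=True, capture_output=True)

# Step 2: create a CSR containing the public key and identity information
subprocess.run(
    ["openssl", "req", "-new", "-key", key_file,
     "-subj", "/CN=www.example.com", "-out", csr_file],
    check=True, capture_output=True)

# The CSR (never the private key) is what goes to the CA in step 3
result = subprocess.run(
    ["openssl", "req", "-in", csr_file, "-noout", "-verify"],
    check=True, capture_output=True, text=True)
```

Steps 3 and 4 happen at the CA's end, so they are not shown here.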

The process of proving the identity of a server is covered by my blog on PKI.

A server needs a number of cipher suites installed, allowing it to negotiate a common cipher that is available on both the client and server. A cipher suite consists of a number of parts:
  • a key exchange algorithm
  • a bulk encryption algorithm
  • a message authentication code (MAC) algorithm
  • a pseudorandom function (PRF)
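The suites a given OpenSSL build actually offers can be listed through Python's standard-library ssl module; each entry's name encodes the parts above (a sketch, and the exact suites will vary by platform):

```python
import ssl

ctx = ssl.create_default_context()

# Each dict describes one suite: its name, the protocol it belongs to, etc.
for suite in ctx.get_ciphers()[:5]:
    print(suite["name"], "-", suite["protocol"])
```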

Examples of some cipher suites that might be found on a server are:
  • SSL_RC4_128_WITH_MD5
For a security service to be considered secure, it should support only strong cryptographic algorithms, such as those defined in the PCI DSS glossary v3:
  • AES (128 bits and higher)
  • TDES (minimum triple-length keys)
  • RSA (2048 bits and higher)
  • ECC (160 bits and higher)
  • ElGamal (2048 bits and higher)

For additional information on cryptographic key strengths and algorithms, see NIST Special Publication 800-57 Part 1.

SSL Handshake

The steps involved in the SSL handshake are as follows:
  1. The client sends the server the client's SSL version number, cipher settings, session-specific data, and other information that the server needs to communicate with the client using SSL.
  2. The server sends the client the server's SSL version number, cipher settings, session-specific data, and other information that the client needs to communicate with the server over SSL. The server also sends its own certificate, and if the client is requesting a server resource that requires client authentication, the server requests the client's certificate.
  3. The client uses the information sent by the server to authenticate the server. If the server cannot be authenticated, the user is warned of the problem and informed that an encrypted and authenticated connection cannot be established.
  4. Using all data generated in the handshake thus far, the client creates the pre-master secret for the session, encrypts it with the server's public key (obtained from the server's certificate, sent in step 2), and then sends the encrypted pre-master secret to the server.
  5. The server decrypts the pre-master secret with its private key; both the client and the server then use the pre-master secret to generate the master secret, and from it the session keys, which are symmetric keys used to encrypt and decrypt information exchanged during the SSL session and to verify its integrity.
  6. The client sends a message to the server informing it that future messages from the client will be encrypted with the session key. It then sends a separate (encrypted) message indicating that the client portion of the handshake is finished.
  7. The server sends a message to the client informing it that future messages from the server will be encrypted with the session key. It then sends a separate (encrypted) message indicating that the server portion of the handshake is finished.
  8. The SSL handshake is now complete and the session begins. The client and the server use the session keys to encrypt and decrypt the data they send to each other and to validate its integrity.
  9. This is the normal operation condition of the secure channel. At any time, due to internal or external stimulus (either automation or user intervention), either side may renegotiate the connection, in which case, the process repeats itself.
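On the client side, the behaviour in steps 1 and 3 is governed by how the TLS context is configured. Python's standard-library ssl module shows this (a sketch, not tied to any particular server):

```python
import ssl

ctx = ssl.create_default_context()

# Step 1: the oldest protocol version this client will offer in its hello
offered_floor = ctx.minimum_version

# Step 3: the server's certificate chain must verify against trusted roots...
must_authenticate = ctx.verify_mode == ssl.CERT_REQUIRED

# ...and the name on the certificate must match the hostname being contacted
name_checked = ctx.check_hostname
```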

Attacks on SSL

The SSL 3.0 cipher suites have a weaker key-derivation process; half of the master key that is established is fully dependent on the MD5 hash function, which is not resistant to collisions and is therefore not considered secure. In October 2014, a vulnerability in the design of SSL 3.0 was reported which makes the CBC mode of operation with SSL 3.0 vulnerable to a padding attack.

Renegotiation attack

A vulnerability in the renegotiation procedure was discovered in August 2009 that can lead to plaintext injection attacks against SSL 3.0 and all current versions of TLS. It allows an attacker who can hijack an HTTPS connection to splice their own requests into the beginning of the conversation the client has with the web server.

Version rollback attacks

An attacker may succeed in influencing the cipher suite selection in an attempt to downgrade the cipher suite strength, to use either a weaker symmetric encryption algorithm or a weaker key exchange. In 2012 it was demonstrated that some extensions to the original protocols are at risk: in certain circumstances this could allow an attacker to recover the encryption keys offline and access the encrypted data.

POODLE attack

On October 14, 2014, Google researchers published a vulnerability in the design of SSL 3.0, which makes cipher block chaining (CBC) mode of operation with SSL 3.0 vulnerable to the padding attack (CVE-2014-3566). They named this attack POODLE (Padding Oracle On Downgraded Legacy Encryption).

BERserk attack

On September 29, 2014 a variant of Daniel Bleichenbacher’s PKCS#1 v1.5 RSA Signature Forgery vulnerability was announced by Intel Security: Advanced Threat Research. This attack, dubbed BERserk, is a result of incomplete ASN.1 length decoding of public key signatures in some SSL implementations, and allows a MITM attack by forging a public key signature.

Heartbleed Bug

The Heartbleed bug is a serious vulnerability in the popular OpenSSL cryptographic software library, affecting versions 1.0.1 to 1.0.1f. This weakness allows stealing the information protected, under normal conditions, by the SSL/TLS encryption used to secure the data payloads. SSL/TLS provides communication security and privacy over the Internet for applications such as web, email, instant messaging (IM) and some virtual private networks (VPNs).

Friday, 20 February 2015

Trust within the PKI

The Public Key Infrastructure (PKI) is the backbone of eCommerce on the Internet; however, it is something that most users don't understand.

Starting from a general user's perspective: when doing anything that requires confidentiality of information, they are told to look for the padlock in the browser and check for https at the start of the URL. They are told to trust the padlock.

What the padlock means is twofold:

  • The web server has had its identity authenticated
  • The communication with the server is protected

Here I want to discuss what it means when we are told the web server has been authenticated. Authentication is the process of determining whether someone or something is, in fact, who or what it is declared to be. With a web server, its identity will be authenticated by the issuer of the digital certificate that has been installed on the server.

The owner (admin) of the web server has gone to a certification authority (CA) and proved the identity of the server to the satisfaction of the CA. The CA, once happy with the identity, will issue a digital certificate bearing the name of the server, the public key of the server and a signature from the CA. The signature is a block of text that has been encrypted with the private key of the issuing CA.

In plain English, we trust the identity of the server based on a third party verifying the identity of the server to an indeterminable standard. The third party (the issuing CA) is an organisation we likely know nothing about, which has used a process to determine the identity that we also know nothing about. The issuing CA could have done a simple check, such as the requester responding to an email sent to an address on the application, or it could be a detailed check of identity covering papers of incorporation of an organisation, financial background, etc.

Basically, we become very trusting because someone we don't know has said the web server is trustworthy. This seems very bad on the face of it; however, with the PKI we actually have a method of getting some assurance, although it does rely on trusting someone at some point.

With the PKI, there are root CA organisations that, through checks based on the IETF and ITU recommendations on PKI infrastructure, have proved their identity to operating system and browser vendors and consequently have had their public keys included in the operating system and browser. Each root CA then performs checks on its child CAs to the same recommendations to verify their identity. This continues with each CA in the chain verifying the next link, until the CA that issued the certificate to the web server is verified by its parent CA, creating a tree hierarchy. Thus the chain of trust is built up.

Back to the certificates: a public key on a certificate allows a message encoded with the corresponding private key to be read. This way the public keys of the root CA certificates in the browser can be used to read the signatures on the certificates issued to their child CAs. Each certificate issued along the chain can thus be verified and authenticated, meaning the certificate issued to the web server can be verified and the identity of the server authenticated.
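The core idea, that a signature made with a private key can be read with the matching public key, can be shown with textbook RSA on toy numbers (deliberately tiny and NOT secure; real CAs sign a hash of the whole certificate with far larger keys):

```python
import math

# Toy RSA key pair (the classic small example primes)
p, q = 61, 53
n = p * q                                 # public modulus, 3233
e = 17                                    # public exponent
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)
d = pow(e, -1, lam)                       # private exponent

digest = 1234 % n                         # stand-in for a hash of the certificate
signature = pow(digest, d, n)             # "signed" with the CA's private key

recovered = pow(signature, e, n)          # anyone with the public key can check
forged = (signature + 1) % n              # any other value fails the check
```

If `recovered` matches the digest the verifier computes itself, the certificate must have been signed by the holder of the private key.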

PKI Trust chain

So, providing the trust model is upheld, with no certificates issued without the identity of the receiving entity being checked thoroughly, we can rely on the authentication of the entity. Therein lies the problem: we need robustness in the PKI to prevent fraudulent certificates being issued, or CAs being hacked and certificates issued without proper authorisation.

Thursday, 19 February 2015

Internet connect toys: what is the risk?

It has been announced that Mattel is partnering with US start-up ToyTalk to develop Hello Barbie, which will hold two-way conversations with children using a speech-recognition platform connected across the Internet.

My initial thoughts were: is this hackable, and could it be used to abuse or groom children? Embedded platforms are often limited in their ability to deploy strong countermeasures such as encryption due to limited processing power, despite modern processors becoming more powerful, and they are difficult to update if a vulnerability is discovered.

A similar toy, Vivid Toys' 'Cayla' talking doll, has been found to be vulnerable to hacking by security researcher Ken Munro, from Pen Test Partners.

The risk of abusing children through insecure internet-connected devices was demonstrated by a baby monitor being hacked and abuse being shouted at a baby. The device from Foscam has since been patched, with the update made available, but a search of Shodan shows devices still susceptible after the patch was released, as deployed devices were not updated.

The toy will listen to the child's conversation and adapt to it over time, according to the manufacturer; so, for instance, if a child mentions that they like to dance, the doll may refer to this in a future chat. The device relies on a WiFi connection, and the speech is likely to be processed by a remote facility. Listening devices have already proved controversial with regard to privacy, as demonstrated by Samsung. The doll is not likely to have a sophisticated control panel, so will the connection to WiFi be through WiFi Protected Setup (WPS)? This has already been the subject of attacks.

Could the toy be used for abuse? I think the answer is that someone will learn how to attack the toy; whether it will be used in real life is another matter, but the impact if it were is such that I hope security has been designed in from the start.

What could be done?

Mattel have not given many details of the security measures deployed to protect the toy and the information recorded. It could be considered that the information is PII and therefore subject to data protection laws.

The question is: should there be a mandatory level of protection that manufacturers have to meet for internet-connected devices if they are to be used by children or could potentially pick up sensitive information? Should a kitemark scheme be introduced to show the manufacturer has met minimum levels of due care and due diligence?

Wednesday, 18 February 2015

Penetration testing as a tool

When protecting your organisation's assets, penetration testing by an ethical hacker is a useful tool in the information security team's arsenal. Penetration testing is used to test the security countermeasures that have been deployed to protect the infrastructure (both physical and digital), the employees and intellectual property. Organisations need to understand the limitations of penetration testing and how to interpret the results in order to benefit from the testing.

It may be used both proactively, to determine attack surfaces and the susceptibility of the organisation to attack, and reactively, to determine how widespread a vulnerability is within the organisation or whether remediation has been implemented correctly.

Penetration testing is a moment-in-time test: it indicates the potential known vulnerabilities in the system at the time of testing. A test that returns no vulnerabilities in the target system does not necessarily mean the system is secure. An unknown vulnerability could exist that the tools are not aware of, or that the tool itself is not capable of detecting. This can lead to the organisation having a false sense of security. I refer you to the case of Heartbleed, which existed in OpenSSL for two years before being discovered and may have been a zero day prior to the public announcement.

A penetration test carried out by a good ethical tester will include the use of a variety of tools in both automatic and manual testing modes, driven by a methodology that ensures attack vectors are not overlooked, mixed with the knowledge and expertise of the tester. A tool is only as good as the craftsman wielding it.

The term ethical hacker or tester means that they will conduct authorised tests within the agreed scope to the highest levels of ethical behaviour; they will not use information obtained for their own purposes or financial gain, or exceed the agreed limits.

An organisation needs to plan its use of penetration testing carefully in order to maximise the benefits.

It can be part of an Information Security Management System and will typically be used in the following areas:

  • Risk management: Determining vulnerabilities within the organisation and the attack surface area
  • Vulnerability management: Detecting the presence of vulnerabilities in the organisation, and determining the effectiveness of remediation
  • Assurance audit: Testing implemented countermeasures
  • Regulatory compliance: Part of auditing to determine controls are implemented

The testing strategy has to be developed to meet the organisation's requirements, which should be driven by its mission objectives and risk appetite. It needs to be cost effective, in that the testing delivers results by giving some assurance on the attack surface area, that the vulnerability management programme is being effective, and that controls are working. The frequency of testing will depend on factors such as how dynamic the organisation is; for example, is the footprint of the organisation evolving the whole time, are there frequent changes to infrastructure and applications, or is it more or less constant with no changes? It will also depend on regulatory and standards compliance activities; the PCI DSS specifies at least quarterly internal and external vulnerability scanning combined with annual penetration testing. The frequency could also be driven by an organisation being a high-profile, controversial target (consider how many attacks the NSA must contend with). These days there is no such thing as security through obscurity, and attackers are not just targeting known URLs; a small unknown company could have avoided being attacked a decade ago, but now, with automated tools scanning large swathes of IP addresses, your digital footprint will be scanned and probably attacked.

A recommendation for many organisations could be a monthly internal compliance scan, with quarterly internal and external vulnerability scans conducted by a qualified tester and not just automated scans, plus an annual penetration test on the external infrastructure and applications conducted by a skilled tester. There will also be a need to conduct scans when significant changes are implemented in the infrastructure or within applications. If your internal network is segmented into security zones, then regular testing of the configuration is required. Social engineering testing, such as physical entry, should be conducted annually, along with an annual phishing test of employees.

When new vulnerabilities are reported, a scan of the infrastructure and applications may be required to determine the extent of the vulnerability within the organisation; typically this would be for high- and critical-rated vulnerabilities. As new controls and remediation activities are completed, testing should be conducted to ensure the work has been done and the vulnerability has been remediated sufficiently. The level of testing could be determined by risk and business impact, with lower-rated systems being vulnerability scanned and high or critical systems undergoing full penetration testing. Organisations may consider using a number of test companies to ensure the widest possible breadth of knowledge is brought against the attack surface, giving the best chance of identifying vulnerabilities.

However, management should be aware that there is always a residual risk that a vulnerability has remained undetected, and should not assume a clean test report is a true indication of the security posture of the organisation. Being vigilant and monitoring for signs of intrusion are part of the security profile organisations should be deploying.

Tuesday, 17 February 2015

Firmware attacks

I read an article on the use of hard drive firmware to hide malware; to me it is not that surprising that it is being used as an attack vector. Firmware in general is a good target; look at the USB controller hacks to see what can be done if a device can be attacked.

Firmware is software, but often stored in rewritable memory; as software it is open to the same bugs and problems as all software and hence has vulnerabilities. Like other software, it can be rewritten to add additional functionality and written back to the storage media. Although vulnerabilities are being discovered in firmware, the firmware in devices is not updated as often as it should be. The slowness in updating firmware in network devices contributed to the number of devices still being detected as susceptible to Heartbleed in OpenSSL after most software was patched.

What surprised me is that the article emphasised the need to have access to the source code. Firmware, like software, can be reverse engineered; I recall, in my early days with a BBC Micro Model B, stepping through code in the EPROMs that were used to extend the functionality of the computer.

A good programmer could step through the firmware in a hard drive or any device and determine the functionality of the routines. It should be possible to find a suitable call that could be overwritten to point to malware and then return execution back to the original routine. This is helped by the fact that the firmware is unlikely to completely fill the storage chips, leaving room for malicious code to be appended and stored.

The concept of attacking through the firmware in hard drives, which is executed as a computer boots via BIOS or EFI, can be expanded to any device that has code executed during initialisation, including graphics cards.

When firmware with an autorun capability is combined with the universal plug-and-play capability that operating systems have, it is surprising that more attacks are not executed through this attack vector.

Think about all the devices with firmware within your network, or the devices your users connect to the network. Can you be sure that they are not introducing malware into the organisation? Consider a firewall or a multifunction device containing a worm that infects a network when it is plugged in or restarted; the possibilities and permutations are limited only by ingenuity.
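One basic countermeasure is to verify a firmware image against the vendor's published digest before flashing it. A minimal sketch, assuming the vendor publishes a SHA-256 hash (the image bytes and digest here are hypothetical stand-ins):

```python
import hashlib

def verify_firmware(blob: bytes, vendor_digest: str) -> bool:
    """Compare the SHA-256 of a firmware image against the vendor's digest."""
    return hashlib.sha256(blob).hexdigest() == vendor_digest

# Stand-in for a downloaded firmware image and its published digest
firmware = b"\x7fFIRMWARE-IMAGE-BYTES"
published = hashlib.sha256(firmware).hexdigest()

ok = verify_firmware(firmware, published)                 # genuine image
tampered = verify_firmware(firmware + b"\x00", published) # modified image
```

This only proves the image matches what the vendor published; it does nothing against malicious code already resident in the device, which is why signed firmware and boot-time attestation matter.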

Monday, 16 February 2015

Browser Cryptography


Back in October I wrote this for my employer's blog. This February the Payment Card Industry Security Standards Council released a bulletin regarding the use of SSL for data protection on the Internet. In the bulletin, the Council states that SSL, a protocol for providing secure communications, is no longer acceptable for secure transactions. This requires payment card providers to move to TLS-only encryption.

Browser Cryptography

In October 2014 we saw a modern attack (POODLE) on a 1990s security protocol (SSLv3), which highlights the fact that, although we consider computing to be a fast-moving field, there are issues with ensuring compatibility with legacy applications and devices, which give rise to security issues. All e-commerce is conducted using secure HTTP (HTTPS), which uses secure sockets layer (SSL) or transport layer security (TLS). These are cryptographic protocols to provide protection. So, what is cryptography and what does it provide in terms of security?

Cryptography and Encryption

Cryptography (the science of ciphers) and encryption (the process of transforming plaintext to ciphertext) have been in the news for the last few years following revelations that the US's NSA, through NIST, weakened random number generator algorithms. Spy and intelligence agencies like the NSA have broken or bypassed encryption, and there have been various attacks on cryptographic protocols, such as Heartbleed and POODLE.

Security is often considered the product of the security triad of confidentiality, integrity and availability. However, it also encompasses a number of other principles that include authorisation, identification and non-repudiation. Cryptography and encryption are some of the tools we can use to provide security.

Although encryption is the main control on confidentiality, and is used by corporate and government bodies to protect our personal data – including credit cards, sensitive data and intellectual property – cryptography is not widely understood.

Cryptography is all around us: we all use it every day while we browse the internet; we rely on it to protect the data we send and receive from secure sites. We know that we should look for the padlock symbol, but very few of us know what it means when it is displayed in a browser. The padlock means the server has been identified by a certificate trusted by one of the 80+ root certificates on your machine, and that encryption is being used. Although the strength of the encryption is not taken into account, it indicates that some form of encryption is deployed between the browser and the server.
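The trust store a default TLS context loads can be inspected from Python's standard library; a sketch (the counts will differ between machines, but the point is that a bundle of root CA certificates ships with the platform):

```python
import ssl

ctx = ssl.create_default_context()

# Counts of loaded certificates: 'x509' (all), 'x509_ca' (CA certs), 'crl'
stats = ctx.cert_store_stats()
print(stats)
```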

SSL and TLS are the protocols used by HTTPS to provide the means of authenticating a server via digital certificates and the public key infrastructure. They are also used to encrypt the communication using a hybrid of symmetric algorithms that encode the data and asymmetric algorithms that ensure the key for the symmetric algorithm is on both the client and the server.
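This hybrid model can be sketched with toy numbers: a Diffie-Hellman-style exchange agrees a symmetric key, which then drives a simple XOR keystream. Everything here (the parameters, the keystream construction) is illustrative only, nothing like production strength:

```python
import hashlib

# Small public parameters, known to everyone (illustrative only)
g, p = 5, 2_147_483_647
a, b = 123_456, 654_321          # each side's private value, never transmitted

A = pow(g, a, p)                 # client sends A over the wire
B = pow(g, b, p)                 # server sends B over the wire

shared_client = pow(B, a, p)     # both sides derive the same secret...
shared_server = pow(A, b, p)     # ...without it ever crossing the wire

def keystream_xor(key: int, data: bytes) -> bytes:
    """Symmetric encryption/decryption via a hash-derived XOR keystream."""
    stream = hashlib.sha256(str(key).encode()).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(x ^ y for x, y in zip(data, stream))

ciphertext = keystream_xor(shared_client, b"cardholder data")
plaintext = keystream_xor(shared_server, ciphertext)
```

The asymmetric part (the exchange of A and B) solves key distribution; the symmetric part does the bulk encryption, which is exactly the split TLS makes.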

Identification issues

Identification is provided by digital certificates that demonstrate that the URL and the server delivering it are correctly matched... but that does not mean you are talking to the server you want! Think of a genuine domain compared with a lookalike misspelling of it: both can be identified by digital certificates, but only one is the genuine article. Scammers will set up secured sites with common misspellings in an attempt to mislead users. It has been known for a fake site to be rated more highly on search engines than the genuine site itself, enabling scammers to commit fraud against the unaware.

Ciphers and cipher suites

Ciphers (the algorithms used in the encryption process) on servers are graded by strength, which is based on the work factor required to break them. The lower the effort required to break the cipher, the weaker the cipher is rated. Generally, ciphers are classified as weak, medium or strong. In actuality, it is the cipher suites that are examined: a cipher suite is a named combination of the authentication, encryption, message authentication code (MAC) and key exchange algorithms used to negotiate the security settings for a network connection using the transport layer security (TLS) or secure sockets layer (SSL) protocols.

Samples of cipher suites:

  • SSL_CK_RC4_128_WITH_MD5
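The parts of a suite can be read straight off an OpenSSL-style name. A rough split of one common modern suite (field positions vary between suites and the sample above uses the older SSLv2 naming, so this is illustrative only):

```python
# ECDHE key exchange, RSA authentication, AES-128 in GCM mode, SHA-256 MAC/PRF
name = "ECDHE-RSA-AES128-GCM-SHA256"

key_exchange, authentication, bulk_cipher, mode, mac = name.split("-")
```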

Cipher suites can include a NULL cipher, meaning no encryption of the data being transferred. In the dark ages of the early noughties a browser might have displayed a padlock despite the data being transmitted as plain text; modern browsers will display an error message about plaintext, though a padlock may still be shown in some browsers.

Servers and clients have a range of cipher suites installed by default. The actual ones installed will depend on the type of machine, vendor and other criteria. As clients and servers alike do not know in advance which other machines they may connect to, they need a range of possible suites installed so there is a chance there is a common denominator between the two machines.
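As an illustration, Python's `ssl` module can list the cipher suites a default client context is prepared to offer during negotiation. This is a sketch for inspection only; the exact list depends on the local OpenSSL build.

```python
import ssl

# Inspect the cipher suites a default TLS client context will offer.
# Each entry names the suite, the protocol it belongs to and its
# symmetric key strength in bits.
ctx = ssl.create_default_context()
suites = ctx.get_ciphers()
for suite in suites[:5]:
    print(suite["name"], suite["protocol"], suite["strength_bits"])
```

Running this against different Python/OpenSSL builds shows how varied the installed suites can be, which is exactly why negotiation is needed.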

Encryption issues

Ciphers cause an overhead in transmitting data (such as latency) due to the need to encrypt and decrypt data as it is transmitted and received. Additional headers and footers are often appended to units of data, decreasing the throughput of useful data over a constant-speed connection. As strong ciphers often introduce increased latency, the weakest common denominator is often selected during the negotiation of common cipher suites to reduce the overhead. This, however, also reduces the work factor needed to break the encryption, opening the communication up to eavesdropping by attackers.

It is also possible to force servers to downgrade to weaker protocols and ciphers through version rollback and downgrade attacks. These attacks force the server to use more vulnerable protocols and ciphers, which aids an attacker in compromising the communication using attacks such as POODLE.

This means that, in order to guarantee the use of strong ciphers, one end of the negotiation must offer only strong ciphers. In practice this means configuring the server so that only strong, non-vulnerable cipher suites are installed.


The following actions are recommended to ensure a strongly protected server when using SSL/TLS encryption.

  • Weak or medium strength ciphers must not be used:
    • No NULL cipher suites, due to no encryption used.
    • No anonymous Diffie-Hellmann, due to not providing authentication.
  • Weak- or medium-strength protocols must be disabled:
    • SSLv2 must be disabled, due to known weaknesses in protocol design.
    • SSLv3 must be disabled, due to known weaknesses in protocol design.
  • Renegotiation must be properly configured:
    • Insecure renegotiation must be disabled, due to MiTM attacks.
    • Client-initiated renegotiation must be disabled, due to denial-of-service vulnerability.
  • X.509 certificate key length must be strong:
    • If RSA or DSA is used, the key must be at least 2048 bits.
  • X.509 certificates must be signed only with secure hashing algorithms:
    • Not signed using the MD5 hash, due to known collision attacks on this hash.
  • Keys must be generated with proper entropy.
  • Secure renegotiation should be enabled.
  • MD5 should not be used, due to known collision attacks.
  • RC4 should not be used, due to crypto-analytical attacks.
  • Server should be protected from BEAST attack.
  • Server should be protected from CRIME attack; TLS compression must be disabled.
  • Server should support forward secrecy.
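A minimal sketch of what several of the points above might look like for a Python service using the standard `ssl` module. The cipher string and version floor are illustrative choices, not a definitive policy:

```python
import ssl

# Harden a server-side TLS context: a modern minimum protocol version
# (which rules out SSLv2/SSLv3 and early TLS), forward-secret AEAD
# suites only, and no NULL, anonymous, MD5 or RC4 suites. TLS
# compression is disabled by default in Python's ssl module,
# mitigating CRIME.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.set_ciphers("ECDHE+AESGCM:!aNULL:!eNULL:!MD5:!RC4")
names = [c["name"] for c in ctx.get_ciphers()]
print(names)
```

The printed list should contain only ECDHE- and TLS 1.3-style suites; anything weak has been excluded by the cipher string.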

It is important to note that some old browsers, such as IE 6, do not support TLS by default; if SSL is removed from the server, these clients will not be able to connect.

Friday, 20 June 2014

CodeSpaces and protecting your intellectual property

The recent attack on CodeSpaces, as reported by Help Net Security, shows how damaging cyber attacks can be to an organisation's intellectual property. The attack was initially about availability: the DDoS was designed to prevent access by the clients of CodeSpaces, but it evolved into the permanent deletion of artefacts. The incident response process of CodeSpaces and its Business Continuity and Disaster Recovery (BC&DR) policy were found wanting.

So what happened

The attack on CodeSpaces was an extortion attempt, it is not clear from the CodeSpaces statement when the attacker had gained access to the Amazon EC2 control panel. What is known is that a DDoS attack was launched and a blackmail attempt was initiated with the attacker using a Hotmail account. CodeSpaces currently have no indication that a malicious insider was involved.

When CodeSpaces started to investigate, they found the attacker had control panel access but not the private keys. According to their statement on the incident, they believed that protected machines had not been accessed. However, this did not prevent artefacts being deleted via the control panel when the attacker realised CodeSpaces was attempting to regain control. CodeSpaces reported: "In summary, most of our data, backups, machine configurations and offsite backups were either partially or completely deleted." The attacker now appears to have delivered a fatal blow to CodeSpaces.

How could it have happened

The critical factor in the attacker delivering a fatal blow was the attacker's privileged access to the control panel for the hosted environment.

How and when the access was gained is not clear. Access to the Amazon EC2 control panel could have been obtained through a vulnerability within the control panel, knowledge of the credentials, or brute forcing the password. Since there has not been a spate of attacks on the Amazon EC2 control panel, it is unlikely that a vulnerability in the panel was exploited; a social engineering attack on an administrator during the DDoS attempt, or a password brute forced prior to the attack (indicating a weak password), are the more likely explanations.

It is a credible explanation that, whilst trying to fend off the DDoS attack, an administrator might respond to a phishing attempt for credentials when in normal circumstances they would be more suspicious. It is a common technique of attackers to launch a DDoS attack to distract administrators from the activities of hackers trying to break into a site. Whilst administrators are firefighting the DDoS attack, normal business activities such as responding to log events are ignored; it is these everyday activities that would indicate additional malicious activity is underway.

Incident Response and BC&DR

A key part of any organisation's BC&DR activities involves backing up and protecting the backup files. CodeSpaces proudly discussed backups, security and continuity on their web site.

They claimed full redundancy, with data centres on 3 continents, and guaranteed 99% uptime. For backups, they claimed to back up clients' data every time a change was made, at multiple off-site locations. The backups were supposedly in real-time, as they had "invested a great deal of time and effort in developing a real-time backup solution that allows us to keep off-site, fully functional backups of clients data". They did state that backups are only as good as the recovery plan, and claimed they had a recovery plan that was "well-practiced and proven to work time and time again".

However the password was gained, access to the EC2 control panel allowed the attacker to create multiple backdoor access routes and gave full control over the artefacts, including deleting them, affecting availability. The attacker may not have been able to breach the confidentiality of the artefacts, as they didn't gain access to private keys, according to CodeSpaces.

Incident response procedures should have attempted to prevent remote access to the affected systems. In an in-house operation the network cable can be pulled and access obtained via a console; with hosted and cloud services this style of brute-force disconnection from the internet is not possible. A better strategy would have been to create a new administrator-level account, throw off all logged-in users and disable all other accounts from logging in.

For BC&DR, backups need to be not only offsite but also stored offline; CodeSpaces were providing resiliency for clients rather than BC&DR for themselves.

Preventing it

With regard to the credentials to the EC2 Control panel, Amazon Web Services customers are responsible for credential management according to Amazon's terms and conditions. Amazon, however, has built-in support for two-factor authentication that can be used with AWS accounts and accounts managed by the AWS Identity and Access Management tool. AWS IAM enables control over user access, including individual credentials, role separation and least privilege.

Amazon provide white papers, tools and services for running BC&DR, but it appears CodeSpaces ignored not only the stronger authentication mechanisms Amazon provide but also Amazon's support for BC&DR architectures.

The use of the cloud is not a replacement for a well thought out and implemented BC&DR policy.

What's Next

This attack could be conducted against a large number of organisations, not necessarily restricted to those hosted in the cloud. Organisations are not helping themselves in protecting sensitive data: a team of researchers from Columbia University, who reverse engineered 880,000 applications found on Google Play, discovered that developers had hard-coded secret authentication keys in the apps, which can lead to attackers stealing server resources or user data available through services such as Amazon Web Services.

Extortion and blackmail are common threats on the Internet; the BBC have recently reported that Nokia 'paid blackmail hackers millions' to keep source code and keys secret. Previously it was the gambling industry that was prone to blackmail attempts via DDoS; however, with organisations increasingly dependent on the internet, anyone could become a victim.

As it appears that password compromise was the key factor, the secure use of strong passwords must be part of the culture of an organisation. Staff awareness, combined with strong computer-generated random passwords and technology such as password vaults and two-factor authentication, would mitigate attacks on passwords.

Additionally, well designed, implemented and tested disaster recovery and business continuity plans should be in place. Cyber attacks and their results need to be catered for in those plans.

Wednesday, 4 June 2014

How is your password attacked?

We protect most of our systems and information with authentication credentials consisting of a username and a password. This is single-factor authentication using something we know (the password).

The passwords we use are open to attack, either by guessing the password and using it to log in, or as a result of a breach where user credentials have been stolen and the lists are subsequently attacked.

Below are some common attack methods used against passwords, along with potential countermeasures.

Social engineering

Attackers will attempt to gain your authentication credentials simply by asking. This can also be combined with other attacks to make them more effective. Most passwords are based on something personal; by discovering details about you, the attacker can build a profile of likely words. Think of the film Wargames, in which Matthew Broderick discovers that the creator of WOPR has left a publicly accessible backdoor with his dead son’s name as the password.

Here, the countermeasures are to educate the user about the danger of social engineering and how attackers use social media as a profiling tool.


Sniffing and key logging

There are various forms of password sniffing or logging that can be used by an attacker. Typically, sniffing is where credentials sent over the network – in particular over wireless networks – are intercepted (sniffed) by an attacker recording the transmitted packets. An additional method, software keyloggers, relies on infecting computers with malware that captures the keystrokes being typed (key logging). This can be combined with screen capture to record the use of virtual keyboards and drop-down boxes (such as selecting the letters of your password), a technique typically used by banking Trojans. Finally, there are physical key loggers that can be attached to or built into a keyboard to capture keystrokes; the latest versions of these have wireless interfaces built in. Physical key loggers were mentioned in some of the reports about the Sumitomo Mitsui Banking Corporation in 2004, and wireless-accessible KVM (Keyboard, Video and Mouse) over IP devices were installed in attacks on Santander and Barclays branches in 2013.

Encryption of traffic over the network, up to date anti-malware on devices and awareness of attempts to install hardware are important countermeasures. The PCI DSS mandates looking for rogue wireless access points, so physical inspection can be combined with checks for malicious hardware.

Password brute forcing

There are various brute force attacks, including attacks on the login screen or against the stored credentials.

Single account

Login screens can be attacked by repeatedly guessing the password and submitting the guess until it is accepted. Lockout mechanisms, such as only allowing four guesses before freezing an account permanently or for a defined period of time, can prevent or slow down these attacks. Captcha can also be used to prevent most automated attacks. Log analysis of failed login attempts should indicate that a potential attack is underway.
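A toy sketch of such a lockout rule, assuming the four-guess limit mentioned above (the function names and in-memory counter are illustrative; a real system would persist state and apply time-based unlocking):

```python
from collections import defaultdict

MAX_ATTEMPTS = 4
failures = defaultdict(int)  # failed-attempt counter per account

def try_login(user, password_ok):
    """Return 'ok', 'denied', or 'locked' for a login attempt."""
    if failures[user] >= MAX_ATTEMPTS:
        return "locked"          # account frozen after too many failures
    if password_ok:
        failures[user] = 0       # successful login resets the counter
        return "ok"
    failures[user] += 1
    return "denied"

for _ in range(4):
    try_login("alice", False)    # four bad guesses...
print(try_login("alice", True))  # prints "locked" - even the right password is refused
```

Note the deliberate trade-off: a permanent lockout also gives attackers a denial-of-service lever, which is why timed lockouts are often preferred.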

Stolen password lists

Stolen password lists are often protected by a cryptographic function called a 'hash'; popular hash algorithms are MD5 and SHA-1. A hash converts its input into a fixed-length message digest, and the same input always generates the same message digest. An attacker will take a guess at a password, hash it using the appropriate algorithm, and compare the resulting message digest to those in the list of stolen password hashes: a match indicates a correct guess. This is a time-consuming process which can be sped up using various techniques, including rainbow tables – pre-computed message digests that can be compared to the stolen password list. If a match is detected, the plain-text version of the password can be found.
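The guess-hash-compare loop can be sketched in a few lines. This uses unsalted SHA-1 purely for illustration; the "stolen" digests and wordlist are made up:

```python
import hashlib

# Toy offline attack on a list of unsalted SHA-1 password hashes:
# hash each candidate guess and compare it to the stolen digests.
stolen = {hashlib.sha1(b"letmein").hexdigest(),
          hashlib.sha1(b"dragon").hexdigest()}

wordlist = ["password", "letmein", "qwerty", "dragon"]
cracked = [w for w in wordlist
           if hashlib.sha1(w.encode()).hexdigest() in stolen]
print(cracked)  # → ['letmein', 'dragon']
```

Because the hash is deterministic and unsalted, every guess costs one hash computation, and precomputed tables work across every system that uses the same algorithm.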

To prevent the use of pre-computed hash tables, passwords are often concatenated with a random value (‘salt’) unique to a system before being hashed. Other techniques to protect against brute forcing include using a hash algorithm multiple times: the attacker must know how many iterations were used. The Linux shadow password file contains a line per account; the password field consists of a number of elements that include the hash algorithm, the salt, and the message digest. Linux also applies the selected hash thousands of times.
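Salting plus repeated hashing (key stretching) can be sketched with PBKDF2 from Python's standard library; the password and iteration count below are illustrative:

```python
import hashlib
import os

# A per-user random salt defeats precomputed rainbow tables, and the
# high iteration count multiplies the attacker's cost per guess.
salt = os.urandom(16)
digest = hashlib.pbkdf2_hmac("sha256", b"correct horse", salt, 100_000)

# Verification repeats the hash with the same salt and iteration count.
again = hashlib.pbkdf2_hmac("sha256", b"correct horse", salt, 100_000)
print(digest == again)  # → True
```

The salt and iteration count are stored alongside the digest (as in the Linux shadow file's per-line fields); only the password itself stays secret.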

Password brute forcing can make use of parallel and distributed processing. Some attack methods make use of multiple GPUs in a machine, and each GPU can have thousands of cores. A 25 GPU cluster can process 95^8 combinations in just 5.5 hours, enough to brute force every possible eight-character password containing upper- and lower-case letters, digits, and symbols.
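A quick sanity check of that figure: 95 printable keyboard symbols over 8 characters, exhausted in 5.5 hours, implies a guess rate in the hundreds of billions per second.

```python
# 95 printable symbols, 8-character passwords, cracked in 5.5 hours.
combinations = 95 ** 8
rate = combinations / (5.5 * 3600)  # guesses per second
print(f"{combinations:,} combinations, ~{rate:.2e} guesses/second")
```

The result is roughly 3.4 × 10^11 guesses per second, which is why fast unsalted hashes like MD5 offer so little protection against well-resourced attackers.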

Stolen password lists are often posted on bulletin boards for other hackers to crack, and some hackers offer password cracking services.


Hackers can also use botnets, which can consist of tens of thousands to millions of machines, to attack passwords. They can be deployed to brute force password lists, or to brute force account credentials with each machine sending a few guesses to the login page to stay within the account lockout rules – when millions of machines are used, accounts can still be guessed and accessed.

Given sufficient resources it is always possible to brute force a password, but a high work factor (defined as the amount of effort – usually measured in units of time – needed to break a cryptosystem) will make it impractical to complete a brute force attack.

Strong passwords are able to resist attempts to crack a user's credentials. Strength is measured by the password's effectiveness in resisting guessing and brute-force attacks; this is a function of its length, complexity and unpredictability.


The longer the password, the larger the combination space. If we assume just lower-case letters, then as the length of the password increases, the number of potential combinations increases exponentially:

Number of letters    Combination space (26 lower-case letters)
4                    26^4 ≈ 457 thousand
8                    26^8 ≈ 209 billion
12                   26^12 ≈ 9.5 × 10^16

For more complex passwords (by adding upper case and numbers) the combination space increases further.

Number of letters    Combination space (62 mixed-case letters and digits)
4                    62^4 ≈ 14.8 million
8                    62^8 ≈ 2.2 × 10^14
12                   62^12 ≈ 3.2 × 10^21
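The exponential growth described above is easy to verify directly, comparing a 26-letter alphabet against a 62-symbol mixed-case alphanumeric one:

```python
# Combination space grows exponentially with password length;
# enlarging the alphabet raises the base of the exponent.
for n in (4, 8, 12):
    print(n, 26 ** n, 62 ** n)
```

At 12 characters the mixed alphabet gives roughly 34,000 times more combinations than lower-case letters alone, which is why both length and character variety matter.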

Complex does not mean strong

A complex password is not necessarily a strong password. Consider a typical complexity rule, such as:
  • Minimum eight characters
  • Must use upper and lower case
  • Must use numeric characters
  • Must use symbols
This can result in a password such as "Password1!". This is not a strong password, even though it meets the complexity rules. The complexity of a password depends on the combination of the symbols used within the password not being used in a predictable way. The number of available symbols is dependent on the characters accessible through the keyboard and accepted by the application.


Part of preventing a password being broken is its unpredictability. A predictable password would be one found in a dictionary, for example. There is a class of password attacks known as dictionary attacks, in which word lists – often from a dictionary – are used as the source of guesses. Word lists are not just dictionary lists; they could be lists of football teams or players. The potential sources are vast, the Internet having lists on just about every topic, from girls' names to the top million used passwords. This means that, for those Manchester United fans who use a player's name as their password, there is a list of every player that has ever been in the squad. The tools that perform these attacks automatically switch numbers and symbols for letters based on well-accepted rules and will automatically append sequences of numbers to the end of the word. If you used the player's name Ryan Giggs as the basis of your password – e.g. Ry4nGi66s1973 – it can be guessed by most tools that accept a list of Man Utd players.
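A toy version of those wordlist "mangling" rules – swap letters for common leet-speak digits and append number sequences (the substitution table and year range are illustrative):

```python
# Common leet-speak substitutions applied by cracking tools.
LEET = str.maketrans({"a": "4", "e": "3", "i": "1", "o": "0", "s": "5"})

def mangle(word):
    """Yield candidate guesses derived from one wordlist entry."""
    yield word
    yield word.translate(LEET)
    for year in range(1970, 1976):            # append plausible years
        yield f"{word}{year}"
        yield f"{word.translate(LEET)}{year}"

guesses = list(mangle("giggs"))
print("g1gg5" in guesses, "giggs1973" in guesses)  # → True True
```

One wordlist entry fans out into dozens of candidates, which is why substituting "4" for "a" or appending a birth year adds almost nothing to a password's real strength.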

Don’t forget that social engineering or looking at your Facebook page could reveal information that may help an attacker select a word list to use in an attack; you could be making your password more predictable by what you say about yourself online.


In order to create a strong password that is resistant to attack, a user must select a password that is long, complex and not based on dictionary words or on 'leet speak' conversions of letters to numbers or symbols. The longer and more complex it is, the more resistant the password will be to attack. Combining passwords (something you know) with a second factor, such as a token (something you have, like your mobile phone), creates a strong authentication system.