Saturday 3 October 2015

CISSP Software Engineering

The 2015 changes to the CISSP common body of knowledge saw the official book discuss the differences between 'Software Engineering' and more 'Traditional Engineering' in the Security Assessment & Testing domain. As part of explaining this I developed the following infographic, which shows the differences.



Sunday 6 September 2015

CISSP v2015

The CISSP certification went through a significant change in April 2015, when it was updated and restructured. It now consists of 8 domains, and I have started looking at the contents of the new certification, which contains 40% new material.

Here are the mind-maps that I have generated of the new domains.

The new 8 Domain CISSP

Domain 1: Security and risk management

Domain 2: Asset security

Domain 3: Security Engineering

Domain 4: Comms and network security

Domain 5: ID and access management

Domain 6: Security Assessment

Domain 7: Security operations

Domain 8: Software development
I will be looking at the content of each domain in future posts.

Sunday 31 May 2015

PCI DSS and SSL/TLS certificates

This article is aimed at those implementing the PCI DSS v3.1 requirements and those conducting audits to ensure an organisation is compliant. It aims to provide some background around the issues, how encryption is incorporated with the standard and how it can be audited.

The Payment Card Industry (PCI) Security Standards Council (SSC) requires merchants and service providers to use industry standards and best practices for strong cryptography and secure protocols. The PCI SSC glossary defines strong cryptography as follows:-

Cryptography based on industry-tested and accepted algorithms, along with strong key lengths (minimum 112-bits of effective key strength) and proper key-management practices. Cryptography is a method to protect data and includes both encryption (which is reversible) and hashing (which is not reversible, or “one way”). At the time of publication, examples of industry-tested and accepted standards and algorithms for minimum encryption strength include AES (128 bits and higher), TDES (minimum triple-length keys), RSA (2048 bits and higher), ECC (160 bits and higher), and ElGamal (2048 bits and higher).
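As a quick illustration, the glossary's example minimums can be expressed as a simple lookup; the sketch below is my own and only covers the algorithms quoted with a single bit length (TDES's "triple-length keys" rule is about key form, so it is omitted):

```python
# Example minimum key lengths quoted in the PCI SSC glossary (in bits).
PCI_MINIMUM_BITS = {
    "AES": 128,
    "RSA": 2048,
    "ECC": 160,
    "ElGamal": 2048,
}

def is_strong(algorithm, key_bits):
    """Return True if the key length meets the glossary's example minimum."""
    minimum = PCI_MINIMUM_BITS.get(algorithm)
    if minimum is None:
        raise ValueError(f"no threshold recorded for {algorithm}")
    return key_bits >= minimum

is_strong("RSA", 1024)  # False: below the 2048-bit minimum
is_strong("AES", 256)   # True
```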

They also refer to NIST documents for guidance on cryptographic key strengths and algorithms. A CISSP certified professional should be aware that cryptographic algorithms have a life-cycle: new, stronger algorithms are introduced, where strength relates to the work factor and time needed to break or brute force the keys. However, the strength of a cryptographic algorithm weakens with time, as Moore's law and other advances in computing reduce the work factor and time needed to break or brute force the keys. Every cryptographic algorithm can be broken if the keys can be derived.

With recent vulnerabilities in SSL and TLS, along with vulnerabilities in RC4, the PCI SSC has raised the bar for what can be considered strong cryptography. PCI DSS v3.1 removes SSL and early TLS from new implementations, and existing implementations must remove them by June 30th, 2016. POS POI devices and their SSL/TLS termination endpoints may continue to use them beyond that date, as long as they can be verified as not being susceptible to the known SSL/TLS exploits. Existing implementations still using SSL and early TLS must have a formal Risk Mitigation and Migration Plan in place. The PCI SSC has published guidance on migrating from SSL and early TLS.

Versions of SSL & TLS


SSL v1
Netscape Proprietary protocol
Not published
SSL v2
Netscape Proprietary protocol
Published Feb 1995
SSL v3
Netscape Proprietary protocol
Published 1996
TLS v1.0
IETF Standard (RFC 2246)
Jan 1999
TLS v1.1
IETF Standard (RFC 4346)
Apr 2006
TLS v1.2
IETF Standard (RFC 5246)
Aug 2008
TLS v1.3
IETF Standard
Draft April 2015

The use of RC4 in all versions of TLS is prohibited by RFC 7465, due to attacks on the algorithm that weaken or break it. RFC 7465 is an Internet Engineering Task Force document published in February 2015, which outlines the changes to TLS that should be implemented to remove the use of RC4.
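The RFC 7465 prohibition is straightforward to enforce in code; a minimal sketch using Python's standard ssl module (the cipher string syntax is OpenSSL's):

```python
import ssl

# Build a client context and explicitly exclude RC4-based suites,
# as RFC 7465 requires.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_ciphers("DEFAULT:!RC4")

# Confirm no RC4 suite remains enabled in this context.
rc4_left = [c["name"] for c in ctx.get_ciphers() if "RC4" in c["name"]]
assert rc4_left == []
```

Modern OpenSSL builds have already dropped RC4 entirely, in which case the exclusion is a no-op but the check still holds.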

In addition to the 3 requirements listed in the guidance notes from the PCI SSC (highlighted in grey below), there are some additional requirements in the PCI DSS that involve strong cryptography.

PCI DSS v3.1 requirement
Description
Requirement 1.1.6
Documentation and business justification for use of all services, protocols, and ports allowed, including documentation of security features implemented for those protocols considered to be insecure
Requirement 2.2.3
Implement additional security features for any required services, protocols, or daemons that are considered to be insecure.
Requirement 2.3
Encrypt all non-console administrative access using strong cryptography.
Requirement 4.1
Use strong cryptography and security protocols to safeguard sensitive cardholder data during transmission over open, public networks
Requirement 8.2.1
Using strong cryptography, render all authentication credentials (such as passwords/phrases) unreadable during transmission and storage on all system components.

When existing implementations of SSL and early TLS are audited, the usage would need to be included under insecure protocols in requirement 1.1.6, along with a business justification for its usage.

If SSL and early TLS are being used, they should be configured to reduce the risk of being exploited, and these additional measures should be covered under requirement 2.2.3.

For requirements 2.3 and 4.1, strong TLS should be used; if SSL and early TLS are used, there should be a risk mitigation and migration plan in place, and the SSL and early TLS should be replaced by 30th June 2016.

For requirement 8.2.1, again strong TLS should be used; if not, controls similar to those for requirements 2.3 and 4.1 should be in place.

Testing the connection


If an HTTPS-based connection is used then the strength of the connection can be determined through a browser. Browsers these days present security information about connections, although different browsers do it in different ways. With Firefox it is easy to determine the encryption strength, as shown in the screenshot below.


With Internet Explorer, using the menu and file properties, the encryption properties of the currently selected tab can be viewed. Additionally, the server certificate can be viewed, along with the trust chain.



If you examine the validity period of the Google certificate, you will notice it is only valid between May 6th, 2015 and Aug 4th, 2015. Whilst this is a short period, it demonstrates a point that security professionals and cryptologists understand: the more frequently a key is used, the greater the chance that statistical analysis will determine something useful about the key, which will help with breaking it. Hybrid cryptographic systems such as TLS and SSH use public/private asymmetric encryption to dynamically generate a symmetric secret key for encrypting the data stream, which protects the information exchanged.
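The session-key idea behind hybrid systems can be illustrated with a toy Diffie-Hellman exchange; the numbers below are deliberately tiny for readability and bear no relation to the sizes TLS actually uses:

```python
# Toy Diffie-Hellman: both parties derive the same symmetric session key
# without that key ever being transmitted.
p, g = 23, 5                 # public prime and generator (toy values)
a, b = 6, 15                 # each side's private value
A = pow(g, a, p)             # one side sends A
B = pow(g, b, p)             # the other sends B
shared_1 = pow(B, a, p)      # computed by the first side
shared_2 = pow(A, b, p)      # computed by the second side
assert shared_1 == shared_2  # identical session key on both sides
```

In real TLS the agreed secret then keys a symmetric cipher such as AES for the data stream.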

The PCI DSS standard v3.1 has a number of requirements involving the life-cycle of cryptographic keys and these will apply to digital certificates as well as any other form of encryption. 

PCI DSS v3.1 requirement
Description
Requirement 3.5
Document and implement procedures to protect keys used to secure stored cardholder data against disclosure and misuse
Requirement 3.5.2
Store secret and private keys used to encrypt/decrypt cardholder data in one (or more) secure forms at all times
Requirement 3.5.3
Store cryptographic keys in the fewest possible locations.
Requirement 3.6
Fully document and implement all key-management processes and procedures for cryptographic keys used for encryption of cardholder data, including the following
Requirement 3.6.1
Generation of strong cryptographic keys
Requirement 3.6.2
Secure cryptographic key distribution
Requirement 3.6.3
Secure cryptographic key storage
Requirement 3.6.4
Cryptographic key changes for keys that have reached the end of their cryptoperiod (for example, after a defined period of time has passed and/or after a certain amount of ciphertext has been produced by a given key), as defined by the associated application vendor or key owner, and based on industry best practices and guidelines
Requirement 3.6.5
Retirement or replacement (for example, archiving, destruction, and/or revocation) of keys as deemed necessary when the integrity of the key has been weakened (for example, departure of an employee with knowledge of a clear-text key component), or keys are suspected of being compromised.
Requirement 3.6.6
If manual clear-text cryptographic key-management operations are used, these operations must be managed using split knowledge and dual control.
Requirement 3.6.7
Prevention of unauthorized substitution of cryptographic keys.
Requirement 3.6.8
Requirement for cryptographic key custodians to formally acknowledge that they understand and accept their key-custodian responsibilities.

To cover the requirements listed above there will need to be suitable policies, procedures, standards, baselines and guidelines in place.

A site with extremely high levels of traffic should have a shorter cryptoperiod (and hence certificate validity period) than one with a minimal amount of traffic. NIST produces a guideline on cryptoperiods, which is mentioned in the PCI DSS standard.

Certificates can have a defined key usage field that specifies the activities the certificate can be used for; it could indicate that the key should be used for signatures but not for encipherment. The correct certificate must be in place on the server.

VeriSign and other bodies use classes of certificates to determine the level of authentication they use to verify the identity of the organisation requesting the certificate.
  • Class 1 for individuals, intended for email.
  • Class 2 for organizations, for which proof of identity is required.
  • Class 3 for servers and software signing, for which independent verification and checking of identity and authority is done by the issuing certificate authority.
  • Class 4 for online business transactions between companies.
  • Class 5 for private organizations or governmental security.
Checking for class 1 is done by simply checking whether the requester responds at the email address the certificate is for; this level of checking is not suitable for an eCommerce operation.

Certificates need to be issued by a recognised certification authority (CA); self-signed certificates, or certificates issued by an untrustworthy CA, should not be used. A certificate trust chain should lead to a certificate from a CA that is already trusted by a computer, without having to install a certificate for the CA.

Within the EU there is an EU Directive 1999/93/EC on "a Community framework for electronic signatures" which defines the term qualified certificate and the requirements an issuer must meet to issue such a certificate.

There are lots of documents describing best practice for digital or X.509 certificates that are either vendor or platform dependent. The recommendation is to find the relevant best practice for the platform and infrastructure you are operating.

In addition to examining a certificate using the tools within browsers, it is possible to download the certificate and use tools such as OpenSSL to display all the fields of the certificate. The following command is an example of how to output the certificate in a readable form. The certificate will need to be in a suitable form, such as .pem or Base64-encoded .cer if exported through the Windows operating system.

openssl x509 -in google.cer -noout -text

In addition to the certificates installed, a server will support a range of cipher suites offering various combinations of encryption algorithms and key lengths to exchange keys, protect data streams, and provide integrity and non-repudiation functionality.

There are a number of naming conventions in use for cipher suites; the most common is the OpenSSL naming convention, which names the algorithms used for the four basic functions of a cipher suite:
  • key exchange,
  • server certificate authentication,
  • stream/block cipher,
  • message authentication.
An example of this would be DHE-RSA-AES256-SHA which breaks down to the following.
  • DHE for key exchange, 
  • RSA for server certificate authentication, 
  • 256-bit key AES for the stream cipher, and 
  • SHA for the message authentication.
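That breakdown can be mechanised; the function below is a simplified, hypothetical parser that assumes the common four-field KEYEXCHANGE-AUTH-CIPHER-MAC layout (real suite names can have more fields, e.g. the GCM suites):

```python
def parse_cipher_suite(name):
    """Split an OpenSSL-style cipher suite name into its four roles.

    Simplified sketch: assumes exactly the KX-AUTH-CIPHER-MAC layout,
    as in "DHE-RSA-AES256-SHA".
    """
    kx, auth, cipher, mac = name.split("-", 3)
    return {"key_exchange": kx, "authentication": auth,
            "cipher": cipher, "mac": mac}

parse_cipher_suite("DHE-RSA-AES256-SHA")
# → {'key_exchange': 'DHE', 'authentication': 'RSA',
#    'cipher': 'AES256', 'mac': 'SHA'}
```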
For PCI DSS compliance, the server should only support cipher/key-length combinations considered to be strong in order to meet the requirements.

When clients connect to servers, they negotiate which ciphers to use and exchange keys as part of the handshake. The chosen cipher has to be common to both devices, and an organisation operating a server generally has no control over what is installed on clients, which will have a range of ciphers installed.

As a cryptologist or security professional should understand, the negotiation does not always go for the strongest cipher, as there is considerable computational overhead in strong ciphers and long key lengths; it often selects the weakest common denominator. Therefore, to force strong cryptography, the server should only have strong ciphers installed.
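On the server side this restriction is a configuration matter; a hedged sketch with Python's standard ssl module, refusing anything below TLS 1.2 and enabling only forward-secret AES-GCM suites (cipher string syntax is OpenSSL's):

```python
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSL and early TLS
ctx.set_ciphers("ECDHE+AESGCM:DHE+AESGCM")    # strong, forward-secret suites

# Sanity check: no weak algorithms remain among the enabled suites.
names = [c["name"] for c in ctx.get_ciphers()]
assert not any(w in n for n in names for w in ("RC4", "DES", "NULL"))
```

On Python 3.7+ the TLS 1.3 suites, which are all strong, are managed separately by the library and may also appear in the list.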

Tools such as Nessus, sslscan and ssltest can list the installed cipher suites, and the SSL Server Test (operated by Qualys SSL Labs) can give a rating to the security of the server certificate and installed ciphers.

Scanning tools can determine the set-up of SSL/TLS and the features that have been enabled, the exposure to currently known vulnerabilities, the installed ciphers and the preferred ciphers. The output of the tools does need to be interpreted by security professionals who understand cryptography.

Additional testing


For compliance testing there is a need to demonstrate that strong cryptography renders all authentication credentials (such as passwords/phrases) and card data unreadable during transmission.

As the requirement is to ensure that a secure channel is created before information is exchanged, the definitive method is to use a packet sniffer, examine the communication session being created, and confirm that a secure channel was established and the data is not being transmitted in cleartext.
To monitor the transmitted packets without affecting either the server or the client, a network tap or a span/mirrored port on a switch can be used, as shown.

Because the packet sniffer is isolated from the endpoints, it can't give a false positive by reading packets before they have been encrypted, which is a possibility if the packet sniffing software were installed on an endpoint.


The network tap, or a switch configured to span or mirror a port, can be used to monitor the traffic between endpoints. The packet sniffer will pick up all traffic passing over the monitored connection, and the capture will need filtering to remove the frames and packets not relevant to the testing.
Tools such as tcpdump, Wireshark and network forensic analysis tools can help an auditor accurately determine whether the connection and the data being transferred are securely encrypted.
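Once a capture has been taken (with tcpdump or Wireshark), part of the audit reduces to confirming that known test credentials never appear in the clear. The helper below is a hypothetical sketch; in practice the payloads would come from the capture file:

```python
def cleartext_leaks(payloads, secrets):
    """Return the test secrets found verbatim in any captured payload.

    payloads: iterable of bytes taken from the capture (e.g. TCP segments)
    secrets:  test credentials submitted during the audit session
    """
    found = set()
    for data in payloads:
        for secret in secrets:
            if secret.encode() in data:
                found.add(secret)
    return found

# A properly encrypted session shows nothing:
captured = [b"\x16\x03\x03\x00\x31\x8a\x9f", b"\x17\x03\x03\x02\x10"]
cleartext_leaks(captured, ["P@ssw0rd-test"])  # → set()
```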

Resources


SSL Server Test is a free online service operated by Qualys SSL Labs that performs a deep analysis of the configuration of any SSL web server on the public Internet.

The OpenSSL Project is a collaborative effort to develop a robust, commercial-grade, full-featured, and Open Source toolkit implementing the Secure Sockets Layer (SSL v2/v3) and Transport Layer Security (TLS) protocols

SSLScan queries SSL services, such as HTTPS, in order to determine the ciphers that are supported. SSLScan is designed to be easy, lean and fast. The output includes preferred ciphers of the SSL service, the certificate and is in Text and XML formats.

tcpdump, a powerful command-line packet analyzer; and libpcap, a portable C/C++ library for network traffic capture.

Wireshark is a network protocol analyzer for Unix and Windows.





Sunday 10 May 2015

Root Servers

When discussing networking and how the internet works as part of some of the courses I deliver, the topic of DNS comes up as a security risk. A question often asked is whether the root servers could be taken offline by a DDoS attack. There have been attempts at doing this, and the two most notable ones are:-

October 21, 2002

On October 21, 2002 an attack lasting for approximately one hour was targeted at all 13 DNS root name servers. The attackers sent many ICMP pings using a botnet to each of the servers. However, because the servers were protected by packet filters which were configured to block all ICMP pings, they did not sustain much damage and there was little to no impact on Internet users.

February 6, 2007

On February 6, 2007 an attack began at 10 AM UTC and lasted twenty-four hours. At least two of the root servers (G-ROOT and L-ROOT) reportedly "suffered badly" while two others (F-ROOT and M-ROOT) "experienced heavy traffic". The latter two servers largely contained the damage by distributing requests to other root server instances with anycast addressing. ICANN published a formal analysis shortly after the event.

A DDoS attack may have been possible in the early days of the internet; however, the resilience and security measures that have been put in place since then make it unlikely, unless the biggest attack ever seen on the internet were conducted.

Root servers resolve the top level domains (TLD) such as .uk, .com or .xxx and are critical to the operation of DNS. According to the Root Server Technical Operations Site there are 13 critical servers with multiple instances of each server using anycast addressing to distribute them around the world.

Root Server Operator Instances
A Verisign, Inc. 5
B Information Sciences Institute 1
C Cogent Communications 8
D University of Maryland 69
E NASA Ames Research Center 12
F Internet Systems Consortium, Inc. 58
G U.S. DOD Network Information Center 6
H U.S. Army Research Lab 2
I Netnod 49
J Verisign, Inc. 81
K RIPE NCC 17
L ICANN 150
M WIDE Project 7
13 Servers 12 Operators 465 Instances

Location of root servers worldwide



Sunday 15 March 2015

Shadow IT: Centralised vs distributed IT Management

Historically, when computers were first introduced into a company it was through individual departments, typically Finance, which purchased their own computer systems. As the usage of computers grew, IT became first a support function and then a core part of the business. The structure of organisations changed as they introduced an IT department, which managed IT and ensured commonality across the whole of the organisation. The governance of IT became centralised within the IT department.

IT Governance is a subset of corporate governance, focused on information and the technology and the performance and risk management around the handling of information and the technology. It is how organisations align their IT strategy with business mission, ensuring they stay on track to achieve their strategies and goals. 

The use of IT has continued to mature throughout organisations, and IT has become a platform or service on top of which the functions of the company are built. If you examine an organisation today there is a core platform of servers, workstations and networks which underpins the finance, sales, marketing, production and other activities. Each of these activities has different requirements and expertise. According to a survey conducted by BT of 1,000 IT "decision-makers", decisions on IT spending are becoming dispersed throughout organisations. This is backed by research by Gartner, which estimates that by 2020, 35% of organisations' technology budget will be spent outside the IT department.

This is creating "shadow IT", and has been given impetus by the growth of consumer technology and cloud computing, which make it increasingly easy to deploy technology without going through the corporate IT department. With businesses under pressure to be innovative, flexible and adaptive, it has been realised that they can often deploy solutions more rapidly by bypassing the IT department. BT's study showed nearly three-quarters of respondents are more concerned about security with the move to a more distributed approach to IT. The various departments are keen to purchase and deploy IT-based solutions; however, they don't want to support them or take responsibility for them working, and are happy for central IT to provide this function.

Ensuring that shadow IT is subject to proper governance is a challenging task for CIOs. Part of the solution is to support the business in meeting its objectives by liaising with all parts of the business. The departments are the experts on what they need; they need support in ensuring those requirements can be met within the corporate governance framework. Shadow IT should not be considered a problem but should be adopted as part of a distributed IT function.

Friday 13 March 2015

Shadow IT – what are the risks?

Increasingly within organisations a shadow organisation is building up that threatens the security of the overall organisation. This is not the mafia or a criminal subculture, but an alternative to the organisation's IT department.

Citizen Programmers + Rogue Devices + BYOD + Tech Savvy Employees = Shadow IT

Increasingly, the workforce is becoming more tech savvy as the millennial generation becomes predominant among employees. Each department has its own group of geeks that the rest of the department turns to as a first line of support. I have seen this everywhere I have worked: people like myself are asked questions or asked to fix things because we are immediately available, often understand both IT and the business function, and give advice quicker and are trusted more than IT support, who can live up to the reputation of the IT Crowd and the phrase "Turn it off and turn it on again".

In the 21st century, businesses increasingly have employees who are "citizen programmers": they develop their own applications using the macro programming languages found in a lot of business software to manipulate raw data and draw out useful information and reports. Citizen programmers can generate applications that become mission critical in the way they draw useful information from the organisation's data. These applications are outside the control of IT and often not known to those doing the business continuity and disaster recovery (BC&DR) activities.

The tech savvy employees, and often those less technically aware, are bringing consumer technology into the office, either as part of BYOD or often as rogue devices that IT and the organisation know nothing about. These can introduce a range of attack vectors that the organisation may not be aware of and is unable to put appropriate controls around. I have seen employees set up Google remote desktop to allow remote access to their workstation so they can be more productive out of the office, while IT was not aware of this remote access channel.

So what are the risks of this shadow IT within your organisation?

  • No governance of the activities
  • Lack of security awareness and alignment with business mission
  • Increased risk of data leakage
  • Increased attack surface area
  • Dependence on unknown and uncontrolled applications


What can be done? IT, like cyber security, needs to be aligned with the business needs. This requires better integration with the end users to ensure they can do their jobs in a secure manner: one that does not affect productivity, allows initiative and innovation, but does not impact on security, which is the triad of confidentiality, integrity and availability of assets.

Thursday 12 March 2015

Forthcoming talk

Hacking the Internet of Things (IoT)


Thursday 14 May 2015

8.00pm at the offices of Sopra Steria, Hemel Hempstead, HP2 7AH

The IoT is a paradigm of how devices are now interconnected by various media to each other locally and across the Internet, allowing them to exchange information or to interact with us. You can control the heating in your home from a smartphone or monitor the hundreds of buoys free floating in ocean currents. IoT has great potential for aiding both us and malicious activities. This talk discusses the IoT and its potentials, followed by discussions and demonstrations of how the IoT can be hacked to reveal details of our interactions or take control of the environment around us.

It includes a demonstration of how RFID can be compromised, by looking at an attack on an RFID-based door access controller.

RFID Cloner

RFID Door Controller
The event is being organised by the Hertfordshire branch of the BCS, details of the talk are on their event page

If you wish to attend this meeting, please would you book your places using this booking link.

Tuesday 10 March 2015

What is phone hacking?

Phone hacking, according to Q762 on the Ask the Police website (https://www.askthe.police.uk/content/Q762.htm), is where people gain unauthorised access to information that is held on a mobile telephone; in most cases these are voicemail messages. It goes on to explain that mobile phone companies set up a default voicemail service for all mobile telephones. This service can then be accessed from other telephones (both mobile and landline) by dialling your mobile telephone number. Once the voicemail service message begins, all a hacker has to do is dial * and enter a PIN, which is a default PIN unless it has been changed. It is this type of hacking that the newspapers in the UK have been accused of and admitted to doing. This type of hacking can be stopped by changing the default PIN and not giving the PIN to anyone.

However, phone hacking is more than this simple example of near social engineering; for example, I would identify the following as phone hacking activities:

  • Phreaking
  • VoIP hacking
  • Voice mail hacking
  • Mobile phone network hacking
  • Insecure wifi usage
  • Smart phone app security

All of these can result in an unintended opportunity, ranging from free phone calls to intercepting and retrieving information.

  • Phreaking involved manipulating the plain old telephone system, which used tones to control switching and functionality. By reverse engineering the tones, phreakers could, for example, route long distance calls.
  • VoIP involves the transfer of voice within the data packets on an internet protocol (IP) network. The hacking of VoIP allows eavesdropping, control of VoIP based private branch exchanges (PBX), the routing of phone calls and other activities.
  • Voice mail hacking allows the retrieval of voice messages often by using default PIN numbers
  • Mobile phone networks use a number of telecommunication protocols that have been hacked allowing interception of mobile phone calls and other malicious activities
  • A lot of mobile devices including phones can make use of WiFi networks and in some instances route phone calls over WiFi connections using VoIP and related technologies. WiFi is difficult to secure and data can be intercepted.
  • Top-of-the-range phones now all come with apps. Insecure coding practices, and in some cases malicious programming, allow data leakage from phones due to vulnerabilities in the installed apps, or allow the apps to take control of the phone, causing it to make premium rate connections via voice, data and SMS.

I will be looking at some of the phone hacking techniques and countermeasures over the next few months as I prepare a talk on the topic.

Sunday 1 March 2015

PCI DSS: outsourced eCommerce

A presence on the internet is considered essential for business; the UK government has a digital inclusion policy to get SMEs online and participating in the digital economy. However, for many small companies, going online and taking payments for services is new and uncharted territory.

Many companies don't appreciate the governance around trading within the digital economy, with issues such as the Payment Card Industry Data Security Standard (PCI DSS) and distance trading regulations part of a wide range of regulations, standards and requirements that a company must get to grips with.

PCI DSS


The PCI DSS was initiated by the payment brands (Visa, Mastercard, American Express, Discover and JCB) to combine their individual security requirements into a single set of requirements. The standard is developed by the PCI Security Standards Council (SSC). It contains mandatory requirements for the storing, processing or transmission of cardholder data, and covers anything that might affect the storing, processing or transmission of cardholder data. Merchants who receive payments by payment card from the five brands are responsible for ensuring the payment collection process is compliant with the standard. Merchants cannot delegate this accountability; even if all payment processing is done by third parties, the merchant is still responsible for ensuring those third parties are compliant.

One of the pitfalls I came across when advising companies about the PCI DSS occurs when they already have an eCommerce presence online before attempting to gain PCI DSS certification, and their existing eCommerce operation is not compliant with the requirements of the standard.

For example, they have a website designed, hosted and managed by third parties to take card payments online rather than doing it themselves; this is a good option for many companies as they may not have the expertise. However, they find that instead of it being an easy process, it has become very difficult due to the use of suppliers that are not compliant with the PCI DSS requirements.

Outsourced eCommerce Compliance


For this type of outsourced eCommerce situation, for companies not meeting level 1 merchant status, there is a cut-down version of the questionnaire known as Self-Assessment Questionnaire SAQ A, "Card-not-present Merchants, All Cardholder Data Functions Fully Outsourced". It has been developed by the SSC to address requirements applicable to merchants whose cardholder data functions are completely outsourced to validated third parties, where the merchant retains only paper reports or receipts with cardholder data.

The eligibility criteria for completing a SAQ A are given within the document; however, the critical point is that cardholder data functions must be completely outsourced to validated third parties. Validated third parties means the service providers must be PCI DSS compliant for the services they deliver, and this includes the following services.


  • Website Design
  • Physical Hosting
  • Managed Hosting
  • Payment processing


There is a distinction between being certified as a merchant and being certified for the services offered. A service provider will have an Attestation of Compliance (AoC) for either a Report on Compliance (RoC) or a SAQ D for service providers, where the AoC will state the services being covered.

Some companies get caught out because their service provider is certified as a Merchant for taking payment and may not have the service being offered covered by the certification.

For example, you could pay for the creation and hosting of a website from a website design company that takes payment by credit card. They may have outsourced their own eCommerce operation and completed a SAQ A themselves. When asked for evidence of compliance, they may offer the SAQ A as proof of certification, but this only covers their merchant activity and not their software development and hosting services. They should have a SAQ D for service providers to prove their services are compliant, and present the AoC for this when requested.

The Problem


In my experience companies get caught out by having a website designed and hosted, and then finding they have to be compliant with the PCI DSS when their acquiring bank asks for a SAQ to be completed. At this point they find out that their suppliers are not PCI DSS compliant for the services contracted, and that they don't have sufficient information to complete a SAQ D (the self-assessed version of the full set of requirements) as they don't have control over the hosting or management of the website.

This leaves them in the situation where they have been asked by their acquiring bank to demonstrate compliance and they are unable to meet the request.

The options are


  • Ask the suppliers to become compliant
  • Audit the suppliers as part of the company's compliance
  • Change to a certified supplier


None of these options are attractive or easy to complete. Whilst a company is non-compliant it could be fined monthly by the acquiring bank, pay additional transaction costs or, in extreme cases, have its ability to process payment cards removed.

Advice


My advice for companies thinking about starting an eCommerce operation is to contact an expert in the PCI DSS and get advice on the standard before actually implementing the website. This can save a lot of hassle, time and money in the long term.

There should also be more effort by governments, acquiring banks, payment brands and payment processors to make sure those new to online payments can get the right advice.

Saturday 28 February 2015

Security Threats

Some of the security threats faced by businesses are shown in the diagram below, which I drew a number of years ago. Although it is still relevant, I'm aiming to update it.

How many do you face on a daily basis?

Thursday 26 February 2015

What is SSL

SSL has been in the news over the last year for a number of high-profile vulnerabilities, but outside the world of the encryption specialist, understanding of what it does is limited. This is fine, as security tools such as SSL and TLS are supposed to be transparent to the end user.

What is SSL


The Secure Sockets Layer (SSL) was a proprietary security protocol developed by Netscape; only versions 2 and 3 were widely used. Version 2.0 was released in February 1995 and version 3.0 in 1996. The 1996 draft of SSL 3.0 was published by the IETF as a historical document in RFC 6101.

The protocol is based on a socket, which is the address of a service and consists of the Internet Protocol address, the port number and the protocol being used.

Hence the following are two distinct sockets:

  • 127.0.0.1:53 UDP
  • 127.0.0.1:53 TCP
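The two sockets above can be modelled as (address, port, protocol) tuples; only the transport protocol differs, yet that alone makes them distinct sockets.

```python
# Model a socket as (IP address, port, protocol) using the standard
# library's protocol constants.
from socket import SOCK_DGRAM, SOCK_STREAM

udp_dns = ("127.0.0.1", 53, SOCK_DGRAM)   # e.g. DNS queries over UDP
tcp_dns = ("127.0.0.1", 53, SOCK_STREAM)  # e.g. DNS zone transfers over TCP

# Same address and port, but different protocols: two distinct sockets.
assert udp_dns[:2] == tcp_dns[:2]
assert udp_dns != tcp_dns
```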

As the name SSL suggests, this is a form of security based around the socket on a server. Typically we expect to see SSL on secure websites, indicated by HTTPS in the URL (address) bar and the padlock.

Secure HTTP typically uses TCP port 443, whilst plaintext HTTP uses TCP port 80.

A key component from a user's perspective is the digital certificate associated with the SSL connection.

Creating an SSL certificate for a web server requires a number of steps to be completed; a simplified version is:

  1. Generation of identity information including the private/public keys
  2. Creation of a Certificate Signing Request (CSR) which includes only the public key
  3. The CSR is sent to a CA who validates the identity
  4. CA generates a SSL certificate that is then installed on the server
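Steps 1 and 2 can be sketched with the third-party `cryptography` package (an assumption of this sketch, not something the post mandates); the hostname is a hypothetical example.

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Step 1: generate the identity's key pair (RSA 2048 bits).
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Step 2: build a CSR carrying the identity and ONLY the public key,
# signed with the private key to prove possession of it.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com"),
    ]))
    .sign(key, hashes.SHA256())
)

# Steps 3 and 4 happen at the CA: it validates the identity in the CSR
# and returns a signed certificate for installation on the server.
pem = csr.public_bytes(serialization.Encoding.PEM)
```

The private key never leaves the server; only the CSR (and therefore only the public key) is sent to the CA.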

The process of proving the identity of a server is covered by my blog on PKI.

A server will require a number of cipher suites to be installed, allowing it to negotiate a common cipher that is available on both the client and server. A cipher suite consists of a number of parts:
  • a key exchange algorithm
  • a bulk encryption algorithm
  • a message authentication code (MAC) algorithm
  • a pseudorandom function (PRF)

Examples of some cipher suites that might be found on a server are:
  • SSL_RC4_128_WITH_MD5
  • SSL_DES_192_EDE3_CBC_WITH_MD5
  • SSL_RC2_CBC_128_CBC_WITH_MD5
  • SSL_DES_64_CBC_WITH_MD5
  • SSL_RC4_128_EXPORT40_WITH_MD5
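The component parts are readable straight from the suite name. A small illustrative parser for the SSL-era naming pattern shown above (the function name is my own):

```python
def parse_cipher_suite(name):
    """Split an SSL-era cipher-suite name into its parts around the
    _WITH_ separator: bulk cipher before it, MAC algorithm after it."""
    prefix, mac = name.split("_WITH_")
    protocol, bulk = prefix.split("_", 1)
    return {"protocol": protocol, "bulk_cipher": bulk, "mac": mac}

print(parse_cipher_suite("SSL_RC4_128_WITH_MD5"))
# → {'protocol': 'SSL', 'bulk_cipher': 'RC4_128', 'mac': 'MD5'}
```

All five example suites use MD5 as the MAC, which is one reason none of them would be considered strong today.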
 
For a security service to be considered secure, it should support only strong cryptographic algorithms, such as those defined in the PCI DSS glossary v3:
 
  • AES (128 bits and higher)
  • TDES (minimum triple-length keys)
  • RSA (2048 bits and higher)
  • ECC (160 bits and higher)
  • ElGamal (2048 bits and higher)
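These minimums can be captured as a simple lookup; a minimal sketch, with the table taken from the PCI DSS glossary list above (the TDES entry uses 168 bits to represent triple-length keys, my interpretation).

```python
# Minimum key lengths for "strong cryptography" per the PCI DSS glossary.
PCI_MINIMUM_KEY_BITS = {
    "AES": 128,
    "TDES": 168,      # triple-length keys (3 x 56-bit effective)
    "RSA": 2048,
    "ECC": 160,
    "ElGamal": 2048,
}

def is_strong(algorithm, key_bits):
    """Return True if the algorithm/key-length pair meets the PCI DSS
    minimum; unknown algorithms are treated as not strong."""
    minimum = PCI_MINIMUM_KEY_BITS.get(algorithm)
    return minimum is not None and key_bits >= minimum

assert is_strong("RSA", 2048)
assert not is_strong("RSA", 1024)
```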

For additional information on cryptographic key strengths and algorithms, see NIST Special Publication 800-57 Part 1 (http://csrc.nist.gov/publications/).
 

SSL Handshake

 
The steps involved in the SSL handshake are as follows:
 
  1. The client sends the server the client's SSL version number, cipher settings, session-specific data, and other information that the server needs to communicate with the client using SSL.
  2. The server sends the client the server's SSL version number, cipher settings, session-specific data, and other information that the client needs to communicate with the server over SSL. The server also sends its own certificate, and if the client is requesting a server resource that requires client authentication, the server requests the client's certificate.
  3. The client uses the information sent by the server to authenticate the server. If the server cannot be authenticated, the user is warned of the problem and informed that an encrypted and authenticated connection cannot be established.
  4. Using all data generated in the handshake thus far, the client creates the pre-master secret for the session, encrypts it with the server's public key (obtained from the server's certificate, sent in step 2), and then sends the encrypted pre-master secret to the server.
  5. Both the client and the server compute the master secret from the pre-master secret, then use the master secret to generate the session keys, which are symmetric keys used to encrypt and decrypt information exchanged during the SSL session and to verify its integrity.
  6. The client sends a message to the server informing it that future messages from the client will be encrypted with the session key. It then sends a separate (encrypted) message indicating that the client portion of the handshake is finished.
  7. The server sends a message to the client informing it that future messages from the server will be encrypted with the session key. It then sends a separate (encrypted) message indicating that the server portion of the handshake is finished.
  8. The SSL handshake is now complete and the session begins. The client and the server use the session keys to encrypt and decrypt the data they send to each other and to validate its integrity.
  9. This is the normal operation condition of the secure channel. At any time, due to internal or external stimulus (either automation or user intervention), either side may renegotiate the connection, in which case, the process repeats itself.
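The heart of steps 4 and 5 is that both ends, holding the same pre-master secret and the random values exchanged in steps 1 and 2, independently derive identical session keys. This is an illustrative sketch only: it uses a plain HMAC, not the real SSL/TLS key-derivation function.

```python
# Illustrative key derivation (NOT the real SSL/TLS PRF): both sides mix
# the shared pre-master secret with the client and server randoms.
import hashlib
import hmac
import os

def derive_session_key(pre_master, client_random, server_random):
    return hmac.new(pre_master, client_random + server_random,
                    hashlib.sha256).digest()

pre_master = os.urandom(48)                       # sent encrypted in step 4
client_random = os.urandom(32)                    # from step 1
server_random = os.urandom(32)                    # from step 2

# Each side runs the same derivation independently...
client_key = derive_session_key(pre_master, client_random, server_random)
server_key = derive_session_key(pre_master, client_random, server_random)

# ...and ends up with the same symmetric session key (step 5).
assert client_key == server_key
```

An eavesdropper sees both random values but never the pre-master secret, so it cannot reproduce the session key.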
 

Attacks on SSL

 
The SSL 3.0 cipher suites have a weaker key-derivation process: half of the master key that is established is fully dependent on the MD5 hash function, which is not resistant to collisions and is therefore not considered secure. In October 2014 a vulnerability in the design of SSL 3.0 was reported that makes the CBC mode of operation with SSL 3.0 vulnerable to a padding attack.

Renegotiation attack

A vulnerability in the renegotiation procedure was discovered in August 2009 that can lead to plaintext injection attacks against SSL 3.0 and all current versions of TLS. It allows an attacker who can hijack an HTTPS connection to splice their own requests into the beginning of the conversation the client has with the web server.

Version rollback attacks

An attacker may succeed in influencing the cipher suite selection in an attempt to downgrade the cipher suite strength, to use either a weaker symmetric encryption algorithm or a weaker key exchange. In 2012 it was demonstrated that some extensions to the original protocols are at risk: in certain circumstances they could allow an attacker to recover the encryption keys offline and access the encrypted data.

POODLE attack

On October 14, 2014, Google researchers published a vulnerability in the design of SSL 3.0, which makes cipher block chaining (CBC) mode of operation with SSL 3.0 vulnerable to the padding attack (CVE-2014-3566). They named this attack POODLE (Padding Oracle On Downgraded Legacy Encryption).

BERserk attack

On September 29, 2014 a variant of Daniel Bleichenbacher’s PKCS#1 v1.5 RSA Signature Forgery vulnerability was announced by Intel Security: Advanced Threat Research. This attack, dubbed BERserk, is a result of incomplete ASN.1 length decoding of public key signatures in some SSL implementations, and allows a MITM attack by forging a public key signature.

Heartbleed Bug

The Heartbleed bug is a serious vulnerability in the popular OpenSSL cryptographic software library, affecting versions 1.0.1 to 1.0.1f. This weakness allows attackers to steal information that is protected, under normal conditions, by the SSL/TLS encryption used to secure the data payloads. SSL/TLS provides communication security and privacy over the Internet for applications such as web, email, instant messaging (IM) and some virtual private networks (VPNs).
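The affected range (1.0.1 up to and including 1.0.1f) lends itself to a simple version check; this is my own illustrative helper, not an official detection tool, and it inspects only the version string, not the binary itself.

```python
import re

def is_heartbleed_vulnerable(version):
    """Return True for OpenSSL version strings in the affected range
    1.0.1 through 1.0.1f (1.0.1g shipped the fix)."""
    match = re.fullmatch(r"1\.0\.1([a-z]?)", version)
    # An empty letter suffix ("1.0.1") sorts before "a", so the plain
    # comparison covers the whole 1.0.1 .. 1.0.1f range.
    return bool(match) and match.group(1) <= "f"

assert is_heartbleed_vulnerable("1.0.1f")
assert not is_heartbleed_vulnerable("1.0.1g")
```

Note the caveat from the linked article: many embedded devices run vulnerable versions that will never be updated, so a version check of your own estate is only a starting point.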
 
 

Friday 20 February 2015

Trust within the PKI


The Public Key Infrastructure (PKI) is the backbone of eCommerce on the Internet; however, it is something most users don't understand.

Starting from a general user's perspective: when doing anything that requires confidentiality of information, they are told to look for the padlock in the browser and check for https at the start of the URL. They are told to trust the padlock.

What the padlock means is twofold:

  • The web server has had its identity authenticated
  • The communication with the server is protected

Here I want to discuss what it means when we are told the web server has been authenticated. Authentication is the process of determining whether someone or something is, in fact, who or what it is declared to be.

With a web server, its identity will be authenticated by the issuer of the digital certificate that has been installed on the server.

The owner (admin) of the web server has gone to a certification authority (CA) and proved the identity of the server to the satisfaction of the CA. Once the CA is happy with the identity, it will issue a digital certificate containing the name of the server, the public key of the server and a signature of the CA. The signature is a block of text that has been encrypted with the private key of the issuing CA.

In plain English, we trust the identity of the server based on a third party verifying that identity to an indeterminable standard. The third party (the issuing CA) is an organisation we likely know nothing about, which has used a process to determine the identity that we also know nothing about. The issuing CA could have done a simple check, such as confirming the requester responded to an email sent to an address on the application, or a detailed check of identity, such as papers of incorporation of an organisation, financial background and so on.

Basically, we become very trusting because someone we don't know has said the web server is trustworthy. This seems very bad on the face of it; however, with the PKI we actually have a method of getting some assurance, although it does rely on trusting someone at some point.

Within the PKI there are root CAs: organisations that, through checks based on the IETF and ITU recommendations on PKI infrastructure, have proved their identity to operating system and browser vendors and consequently have had their public keys included in the operating system and browser. Each root CA then performs checks on child CAs against those recommendations to verify their identity. This continues with each CA in the chain verifying the next link, until the CA that issued a certificate to the web server is verified by its parent CA, creating a tree hierarchy. Thus the chain of trust is built up.

Back to the certificates: a public key on a certificate allows a message encoded with the corresponding private key to be read. This way the public keys of the root CA certificates in the browser can be used to read the signatures on the certificates issued to their child CAs. Each certificate issued along the chain can thus be verified and authenticated, meaning the certificate issued to the web server can be verified and the identity of the server authenticated.
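The chain-walking logic can be sketched as a toy model. This uses hypothetical CA names and plain dictionaries with no real cryptography: each link is trusted only if its issuer matches the next certificate's subject, and the chain terminates at a root in the trust store.

```python
# A toy model of certificate-chain validation (no real signatures):
# the trust store stands in for the root CA keys shipped with the
# operating system or browser.
trust_store = {"RootCA"}

# Leaf first, root's child last -- all names are hypothetical.
chain = [
    {"subject": "www.example.com", "issuer": "IssuingCA"},
    {"subject": "IssuingCA", "issuer": "IntermediateCA"},
    {"subject": "IntermediateCA", "issuer": "RootCA"},
]

def chain_is_trusted(chain, trust_store):
    """Walk the chain leaf-to-root: every link's issuer must be the
    next certificate's subject, and the last issuer must be a root."""
    for cert, parent in zip(chain, chain[1:]):
        if cert["issuer"] != parent["subject"]:
            return False  # broken link in the chain of trust
    return chain[-1]["issuer"] in trust_store

assert chain_is_trusted(chain, trust_store)
```

In the real PKI each link is checked by verifying the issuer's signature with the issuer's public key, but the hierarchy being walked is the same.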


PKI Trust chain


So, providing the trust model is upheld, with no certificates issued without the identity of the receiving entity being checked thoroughly, we can rely on the authentication of the entity. Therein lies the problem: we need robustness in the PKI to prevent fraudulent certificates being issued, or CAs being hacked and certificates issued without proper authorisation.

Thursday 19 February 2015

Internet connect toys: what is the risk?

It has been announced that Mattel is partnering with US start-up ToyTalk to develop Hello Barbie (https://fortune.com/2015/02/17/barbies-comeback-plan/), which will have two-way conversations with children using a speech-recognition platform connected across the Internet.

My initial thoughts were: is this hackable, and could it be used to abuse or groom children? Embedded platforms are often limited in their ability to deploy strong countermeasures such as encryption due to limited processing power (despite modern processors becoming more powerful), and they are difficult to update if a vulnerability is discovered.

A similar toy, Vivid Toy's 'Cayla' talking doll, has been found to be vulnerable to hacking by security researcher Ken Munro, from Pen Test Partners (http://www.techworm.net/2015/01/vivid-toys-cayla-talking-doll-vulnerable-hacking-says-security-researcher.html).

The risk of children being abused through insecure internet-connected devices was demonstrated by a baby monitor being hacked and abuse being shouted at a baby (https://www.yahoo.com/parenting/nanny-freaks-as-baby-monitor-is-hacked-109405425022.html). The device from Foscam has been patched, with the update available, but a search of Shodan shows devices still susceptible after the patch was made available, as deployed devices were not updated.

The toy will listen to the child's conversation and adapt to it over time, according to the manufacturer; so, for instance, if a child mentions that they like to dance, the doll may refer to this in a future chat. The device relies on a WiFi connection, and the speech is likely to be processed by a remote facility. Listening devices have already proved controversial for privacy, as demonstrated by Samsung (http://www.theregister.co.uk/2015/02/09/samsung_listens_in_to_everything_you_say_to_your_smart_tellie/). The doll is unlikely to have a sophisticated control panel, so will the connection to WiFi be through WiFi Protected Setup (WPS)? This has already been the subject of attacks (http://null-byte.wonderhowto.com/how-to/hack-wi-fi-breaking-wps-pin-get-password-with-bully-0158819/).

Could the toy be used for abuse? I think the answer is that someone will learn how to attack the toy. Whether it will be used in real life is another matter, but the impact if it were is such that I hope security has been designed in from the start.

What could be done?

Mattel have not given many details on the security measures deployed to protect the toy and the information recorded. It could be considered that the information is PII and therefore subject to data protection laws.

The question is: should there be a mandatory level of protection that manufacturers have to meet for internet-connected devices if they are to be used by children or could potentially pick up sensitive information? Should a kitemark scheme be introduced to show the manufacturer has met the minimum levels of due care and due diligence?

Wednesday 18 February 2015

Penetration testing as a tool

When protecting your organisation's assets, penetration testing by an ethical hacker is a useful tool in the information security team's arsenal. Penetration testing is used to test the security countermeasures that have been deployed to protect the infrastructure (both physical and digital), the employees and the intellectual property. Organisations need to understand the limitations of penetration testing and how to interpret the results in order to benefit from the testing.

It may be used both proactively, to determine attack surfaces and the susceptibility of the organisation to attack, and reactively, to determine how widespread a vulnerability is within the organisation or whether remediation has been implemented correctly.

Penetration testing is a moment-in-time test: it indicates the potential known vulnerabilities in the system at the time of testing. A test that returns no vulnerabilities in the target system does not necessarily mean the system is secure. An unknown vulnerability could exist in the system that tools are not aware of, or that the tool itself is not capable of detecting. This can lead to the organisation having a false sense of security. I refer you to the case of Heartbleed, which existed in OpenSSL for two years before being discovered and may have been a zero day prior to the public announcement.

A penetration test carried out by a good ethical tester will include the use of a variety of tools in both automatic and manual testing modes, driven by a methodology that ensures attack vectors are not overlooked, mixed with the knowledge and expertise of the tester. A tool is only as good as the craftsman wielding it.

The term ethical hacker or tester means that they will conduct authorised tests within the agreed scope to the highest levels of ethical behaviour; they will not use information obtained for their own purposes or financial gain, or exceed the agreed limits.

An organisation needs to plan its use of penetration testing carefully in order to maximise the benefits.

It can be part of an Information Security Management System and will typically be used in the following areas:


  • Risk management: Determining vulnerabilities within the organisation and the attack surface area
  • Vulnerability management: Detecting the presence of vulnerabilities in the organisation and determining the effectiveness of remediation
  • Assurance audit: Testing implemented countermeasures
  • Regulatory compliance: Part of auditing to determine controls are implemented


The testing strategy has to be developed to meet the organisation's requirements, which should be driven by its mission objectives and risk appetite. It needs to be cost-effective, in that the testing delivers results by giving some assurance on the attack surface area, that the vulnerability management programme is effective and that controls are working.

The frequency of testing will depend on factors such as how dynamic the organisation is: for example, is the footprint of the organisation evolving the whole time, are there frequent changes to infrastructure and applications, or is it more or less constant with no changes? It will also depend on regulatory and standards compliance activities; the PCI DSS, for instance, specifies at least quarterly internal and external vulnerability scanning combined with annual penetration testing. The frequency could also be driven by the organisation being a high-profile, controversial target (consider how many attacks the NSA must contend with). These days there is no such thing as security through obscurity, and attackers are not just targeting well-known URLs; a small, unknown company might have avoided being attacked a decade ago, but now, with automated tools scanning large swathes of IP addresses, your digital footprint will be scanned and probably attacked.

A recommendation for many organisations could be a monthly internal compliance scan, with quarterly internal and external vulnerability scans conducted by a qualified tester and not just automated scans, plus an annual penetration test on the external infrastructure and applications conducted by a skilled tester. There will be a need to conduct scans when significant changes are implemented on the infrastructure or within applications. If your internal network is segmented into security zones then regular testing of the configuration is required. Social engineering testing, such as physical entry, should be conducted annually, along with an annual phishing test of employees. When new vulnerabilities are reported, a scan of the infrastructure and applications may be required to determine the extent of the vulnerability within the organisation; typically this would be for high and critical rated vulnerabilities. As new controls and remediation activities are completed, testing should be conducted to ensure the work has been completed and the vulnerability has been remediated sufficiently. The level of testing could be determined by risk and business impact, with lower-rated systems being vulnerability scanned and high or critical systems undergoing full penetration testing. Organisations may consider using a number of test companies to ensure the widest possible breadth of knowledge is brought against the attack surface, giving the best chance of identifying vulnerabilities.
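The cadence recommended above can be summarised as a simple schedule structure; the frequencies are this post's suggestion for a typical organisation, not a mandate, and the activity names are my own shorthand.

```python
# Suggested testing cadence from the recommendation above (illustrative,
# not a compliance requirement).
testing_schedule = {
    "internal compliance scan":          "monthly",
    "internal vulnerability scan":       "quarterly",
    "external vulnerability scan":       "quarterly",
    "external penetration test":         "annually",
    "social engineering / phishing":     "annually",
    "segmentation configuration test":   "regularly",
    "post-change scan":                  "on significant change",
    "new-vulnerability sweep":           "on high/critical disclosure",
    "remediation verification":          "on completion of fixes",
}

for activity, frequency in testing_schedule.items():
    print(f"{activity}: {frequency}")
```

A structure like this makes it easy to check the programme against compliance drivers such as the PCI DSS quarterly scanning requirement.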

However, management should be aware there is always a residual risk that a vulnerability has remained undetected, and should not assume a clean test report is a true indication of the security posture of the organisation. Being vigilant and monitoring for signs of intrusion are part of the security profile organisations should be deploying.

Tuesday 17 February 2015

Firmware attacks

I read an article on the use of hard drive firmware to hide malware (http://www.reuters.com/article/2015/02/17/us-usa-cyberspying-idUSKBN0LK1QV20150217); to me it is not that surprising that it is being used as an attack vector. Firmware in general is a good target; look at the USB controller hacks (https://srlabs.de/badusb/) to see what can be done if a device can be attacked.

Firmware is software, but often stored in rewritable memory; as software, it is open to the same bugs and problems as all software and hence has vulnerabilities. Like software, it can also be rewritten to the storage media to add additional functionality. Although vulnerabilities are being discovered in firmware, the firmware in devices is not updated as often as it should be. The slowness in updating firmware in network devices contributed to the number of devices still being detected as susceptible to Heartbleed in OpenSSL after most software was patched (http://www.technologyreview.com/news/526451/many-devices-will-never-be-patched-to-fix-heartbleed-bug/).

What surprised me is that the article emphasised the need to have access to the source code. Firmware, like software, can be reverse engineered; I recall in my early days with a BBC Micro Model B stepping through code in the EPROMs that were used to extend the functionality of the computer.

A good programmer could step through the firmware in a hard drive or any other device and determine the functionality of the routines. It should be possible to identify a suitable call that could be overwritten to point to malware and then return execution back to the original routine. This is helped by the fact that the firmware is unlikely to completely fill the storage chips, leaving room for malicious code to be appended and stored.

The concept of attack through firmware in hard drives, executed as a computer boots via the BIOS or EFI, can be extended to any device that has code executed during initialisation, including graphics cards.

When firmware with an autorun capability is combined with the universal plug-and-play capability that operating systems have, it is surprising that more attacks are not executed through this attack vector.

Think about all the devices with firmware within your network, or that your users connect to the network. Can you be sure they are not introducing malware into the organisation? Consider a firewall or a multifunction device containing a worm that infects a network when it is plugged in or restarted; the possibilities and permutations are limited only by ingenuity.