Sunday, 10 May 2015

Root Servers

When discussing networking and how the internet works as part of some of the courses I deliver, the topic of DNS comes up as a security risk. A question often asked is whether the root servers could be taken offline by a DDoS attack. There have been attempts to do this, and the two most notable ones are:-

October 21, 2002

On October 21, 2002 an attack lasting approximately one hour was targeted at all 13 DNS root name servers. The attackers used a botnet to send many ICMP pings to each of the servers. However, because the servers were protected by packet filters configured to block all ICMP pings, they sustained little damage and there was little to no impact on Internet users.

February 6, 2007

On February 6, 2007 an attack began at 10 AM UTC and lasted twenty-four hours. At least two of the root servers (G-ROOT and L-ROOT) reportedly "suffered badly" while two others (F-ROOT and M-ROOT) "experienced heavy traffic". The latter two servers largely contained the damage by distributing requests to other root server instances with anycast addressing. ICANN published a formal analysis shortly after the event.

A DDoS attack may have been possible in the early days of the internet; however, the resilience and security measures that have been put in place since then make it unlikely, unless the biggest attack ever seen on the internet were conducted.

Root servers resolve the top-level domains (TLDs) such as .uk, .com or .xxx and are critical to the operation of DNS. According to the Root Server Technical Operations Site there are 13 critical servers, with multiple instances of each server using anycast addressing to distribute them around the world.

Root Server Operator Instances
A Verisign, Inc. 5
B Information Sciences Institute 1
C Cogent Communications 8
D University of Maryland 69
E NASA Ames Research Center 12
F Internet Systems Consortium, Inc. 58
G U.S. DOD Network Information Center 6
H U.S. Army Research Lab 2
I Netnod 49
J Verisign, Inc. 81
M WIDE Project 7
13 Servers 12 Operators 465 Instances
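The 13 root identities are reachable under the hostnames a.root-servers.net through m.root-servers.net, and anycast means a lookup of any of them is answered by the nearest instance. As a small sketch, the snippet below builds the hostname list and, where a network is available, resolves each one; the resolution step is wrapped in a try/except since it needs connectivity:

```python
import socket
import string

# The 13 root server identities: a.root-servers.net .. m.root-servers.net.
hostnames = [f"{letter}.root-servers.net" for letter in string.ascii_lowercase[:13]]
print(len(hostnames), "root server identities")

# Optional: resolve each name. With anycast, the address returned is served
# by whichever instance is topologically nearest. Requires connectivity.
for name in hostnames:
    try:
        print(name, socket.gethostbyname(name))
    except OSError:
        print(name, "(no DNS resolution available)")
```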

Location of root servers worldwide

Sunday, 15 March 2015

Shadow IT: Centralised vs distributed IT Management

Historically, when computers were first introduced into a company it was through individual departments, typically Finance, which purchased their own computer systems. As the usage of computers grew, IT became first a support function and then a core part of the business. The structure of organisations changed as they introduced an IT department which managed IT and ensured commonality across the whole of the organisation. The governance of IT became centralised within the IT department.

IT governance is a subset of corporate governance, focused on information and technology and on the performance and risk management around handling them. It is how organisations align their IT strategy with the business mission, ensuring they stay on track to achieve their strategies and goals.

The use of IT has continued to mature throughout organisations, and IT has become a platform or service on top of which the functions of the company are built. If you examine an organisation today, there is a core platform of servers, workstations and networks which underpins the finance systems, sales, marketing, production and other activities. Each of these activities has different requirements and expertise. IT spending decisions are becoming dispersed throughout organisations, according to a survey conducted by BT of 1,000 IT "decision-makers". This is backed by research by Gartner, which estimates that by 2020, 35% of organisations’ technology budget will be spent outside the IT department.

This is creating “shadow IT”, which has been given impetus by the growth of consumer technology and cloud computing, both of which make it increasingly easy to deploy technology without going through the corporate IT department. With businesses under pressure to be innovative, flexible and adaptive, it has been realised that they can often deploy solutions more rapidly by bypassing the IT department. BT’s study showed nearly three-quarters of respondents are more concerned about security with the move to a more distributed approach to IT. The various departments are very keen to purchase and deploy IT-based solutions; however, they don’t want to support them or take responsibility for them working, and are happy for central IT to provide this function.

Ensuring that shadow IT is subject to proper governance is a challenging task for CIOs. Part of the solution is supporting the business in meeting its objectives by liaising with all parts of the business. They are the experts on what they need; they need support in ensuring the requirements can be met within the corporate governance framework. Shadow IT should not be considered a problem but should be adopted as part of a distributed IT function.

Friday, 13 March 2015

Shadow IT – what are the risks?

Increasingly within organisations a shadow organisation is building up that threatens the security of the overall organisation. This is not the mafia or a criminal subculture, but an alternative to the organisation's IT department.

Citizen Programmers + Rogue Devices + BYOD + Tech Savvy Employees = Shadow IT

Increasingly, the workforce is becoming more tech savvy as the millennial generation starts to predominate among employees. Each department has its own group of geeks that the rest of the department turns to as a first line of support. I have seen this everywhere I have worked: people like myself are asked questions or asked to fix things because we are immediately available, often understand both IT and the business function, and can give advice more quickly, and we are trusted more than IT support, who can live up to the reputation of the IT Crowd and the phrase “Turn it off and turn it on again”.

In the 21st century businesses are increasingly facing employees who are “citizen programmers”: they have developed their own applications with the macro programming languages found in a lot of business software, to manipulate raw data and draw out useful information and reports. Citizen programmers can generate applications that become mission critical in the way they draw useful information from the organisation’s data. These applications are outside the control of IT and often not known to those doing the BC&DR activities.

Tech savvy employees, and often those less technically aware, are bringing consumer technology into the office, either as part of BYOD or often as rogue devices that IT and the organisation know nothing about. These can introduce a range of attack vectors that the organisation may not be aware of and be unable to put appropriate controls in place for. I have seen employees set up Google remote desktop to allow remote access to their workstation so they can be more productive out of the office, with IT unaware of this remote access channel.

So what are the risks of this shadow IT within your organisation?

  • No governance of the activities
  • Lack of security awareness and alignment with business mission
  • Increased risk of data leakage
  • Increased attack surface area
  • Dependence on unknown and uncontrolled applications

What can be done? IT, like cyber security, needs to be aligned with business needs, and this requires better integration with end users to ensure they can do their jobs in a secure manner: one that does not affect productivity, allows initiative and innovation, but does not impact on security, which is the triad of confidentiality, integrity and availability of assets.

Thursday, 12 March 2015

Forthcoming talk

Hacking the Internet of Things (IoT)

Thursday 14 May 2015

8.00pm at the offices of Sopra Steria, Hemel Hempstead, HP2 7AH

The IoT is a paradigm in which devices are interconnected by various media, to each other locally and across the Internet, allowing them to exchange information or to interact with us. You can control the heating in your home from a smartphone or monitor the hundreds of buoys free-floating in ocean currents. The IoT has great potential both for aiding us and for malicious activities. This talk discusses the IoT and its potential, followed by discussions and demonstrations of how the IoT can be hacked to reveal details of our interactions or take control of the environment around us.

It includes a demonstration of how RFID can be compromised, by looking at an attack on an RFID-based door access controller.

RFID Cloner

RFID Door Controller
The event is being organised by the Hertfordshire branch of the BCS; details of the talk are on their event page.

If you wish to attend this meeting, please book your place using this booking link.

Tuesday, 10 March 2015

What is phone hacking?

Phone hacking, according to Q762 on the Ask the Police website, is where people gain unauthorised access to information that is held on a mobile telephone; in most cases these are voicemail messages. It goes on to explain that mobile phone companies set up a default voicemail service for all mobile telephones. This service can then be accessed from other telephones (both mobile and landline) by dialling your mobile telephone number. Once the voicemail service message begins, all a hacker has to do is dial * and enter a PIN, which is a default PIN unless it has been changed. It is this type of hacking that newspapers in the UK have been accused of and admitted to doing. This type of hacking can be stopped by changing the default PIN and not giving the PIN to anyone.
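To see why a default or short PIN offers so little protection, a rough calculation of the search space helps. The figures below are illustrative assumptions (a 4-digit PIN, a guessed redial time), not measurements:

```python
# Rough illustration: how quickly a 4-digit voicemail PIN space is exhausted.
# The per-attempt time is an assumed figure for illustration only.
pin_length = 4
combinations = 10 ** pin_length          # 0000 to 9999
seconds_per_attempt = 6                  # assumed: redial, wait for greeting, key PIN
worst_case_hours = combinations * seconds_per_attempt / 3600

print(combinations, "possible PINs")
print(f"worst case ~{worst_case_hours:.0f} hours of automated dialling")
```

And of course a hacker trying the handful of well-known default PINs first needs only a few attempts, not the full search.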

However, phone hacking is more than this simple example of almost social engineering; for example, I would identify the following as phone hacking activities:

  • Phreaking
  • VoIP hacking
  • Voice mail hacking
  • Mobile phone network hacking
  • Insecure wifi usage
  • Smart phone app security

All of these can result in an unintended opportunity, ranging from free phone calls to intercepting and retrieving information.

  • Phreaking involved manipulating the plain old telephone system, which used tones to control switching and functionality. By reverse engineering the tones, phreakers could route long distance calls, for example.
  • VoIP involves the transfer of voice within the data packets on an internet protocol (IP) network. The hacking of VoIP allows eavesdropping, control of VoIP based private branch exchanges (PBX), the routing of phone calls and other activities.
  • Voice mail hacking allows the retrieval of voice messages often by using default PIN numbers
  • Mobile phone networks use a number of telecommunication protocols that have been hacked allowing interception of mobile phone calls and other malicious activities
  • A lot of mobile devices including phones can make use of WiFi networks and in some instances route phone calls over WiFi connections using VoIP and related technologies. WiFi is difficult to secure and data can be intercepted.
  • The top of the range phones now all come with apps. Insecure coding practice and in some cases malicious programming allow data leakage from phones due to vulnerabilities in the apps installed, or the apps can take control of the phone, causing it to make premium rate connections via voice, data and SMS.

I will be looking at some of the phone hacking techniques and countermeasures over the next few months as I prepare a talk on the topic.

Sunday, 1 March 2015

PCI DSS: outsourced eCommerce

A presence on the internet is considered essential for business; the UK government has a digital inclusion policy to get SMEs online and participating in the digital economy. However, for many small companies, going online and taking payments for services online is new and uncharted territory.

Many companies don't appreciate the governance around trading within the digital economy, with issues such as the Payment Card Industry Data Security Standard (PCI DSS) and distance trading regulations part of a wide range of regulations, standards and requirements that a company must get to grips with.


The PCI DSS was initiated by the payment brands (Visa, Mastercard, American Express, Discover and JCB) to combine their individual security requirements into a single set of requirements. The standard is developed by the PCI Security Standards Council (SSC). It contains mandatory requirements for the storing, processing or transmission of cardholder data and covers anything that might affect the storing, processing or transmission of cardholder data. Merchants who receive payment card payments from the five brands are responsible for ensuring the payment collection process is compliant with the standard. Merchants cannot delegate this accountability: even if all payment processing is done by third parties, the merchant is still subject to the requirement of ensuring the third parties are compliant.

One of the pit falls I came across when advising companies about the PCI DSS occurs when they have already got an eCommerce presence online before attempting to gain PCI DSS certification and their existing eCommerce operation is not compliant with the requirements of the standard.

For example, they have a website designed, hosted and managed by third parties to take card payments online rather than doing it themselves; this is a good option for many companies as they may not have the expertise. However, they find that instead of it being an easy process, it has become very difficult due to the use of suppliers that are not compliant with the PCI DSS requirements.

Outsourced eCommerce Compliance

For this type of outsourced eCommerce, for companies not meeting level 1 merchant status, there is a cut-down version of the questionnaire known as Self-Assessment Questionnaire (SAQ) A, "Card-not-present Merchants, All Cardholder Data Functions Fully Outsourced". It has been developed by the SSC to address requirements applicable to merchants whose cardholder data functions are completely outsourced to validated third parties, where the merchant retains only paper reports or receipts with cardholder data.

The eligibility criteria for completing an SAQ A are given within the document; however, the critical point is that cardholder data functions must be completely outsourced to validated third parties. "Validated third parties" means the service providers must be PCI DSS compliant for the services they deliver, and this includes the following services.

  • Website Design
  • Physical Hosting
  • Managed Hosting
  • Payment processing

There is a distinction between being certified as a merchant and being certified for services offered. A service provider will have an Attestation of Compliance (AoC) for either a RoC or an SAQ D for service providers, where the AoC will state the services being covered.

Some companies get caught out because their service provider is certified as a Merchant for taking payment and may not have the service being offered covered by the certification.

For example, you could pay for the creation and hosting of a website from a website design company that takes payment by credit card. They may have outsourced their eCommerce operation and completed an SAQ A themselves. When asked for evidence of compliance, they may offer the SAQ A as proof of certification, but this only covers their merchant activity and not their software development and hosting services. They should have an SAQ D for service providers to prove their services are compliant, and present the AoC for this when requested.

The Problem

In my experience companies get caught out by having a website designed and hosted and then finding they have to be compliant with the PCI DSS when their acquiring bank asks for an SAQ to be completed. At this point they find out that their suppliers are not PCI DSS compliant for the services contracted, and also that they don't have sufficient information to complete an SAQ D (the self-assessed version of the full set of requirements), as they don't have control over the hosting or management of the website.

This leaves them in the situation where they have been asked by their acquiring bank to demonstrate compliance and they are unable to meet the request.

The options are

  • Ask the suppliers to become compliant
  • Audit the suppliers as part of the company's compliance
  • Change to a certified supplier

None of these options is attractive or easy to complete. While a company is non-compliant, it could be fined monthly by the acquiring bank, pay additional transaction costs or, in extreme cases, have the ability to process payment cards removed.


My advice for companies thinking about starting an eCommerce operation is to contact an expert in the PCI DSS and get advice on the standard before actually implementing the website. This can save a lot of hassle, time and money in the long term.

There should also be more effort by governments, acquiring banks, payment brands and payment processors to make sure those new to online payments can get the right advice.

Saturday, 28 February 2015

Security Threats

Some of the security threats faced by businesses are shown in the diagram below, which I drew a number of years ago. Although it is still relevant, I'm aiming to update it.

How many do you face on a daily basis?

Thursday, 26 February 2015

What is SSL

SSL has been in the news over the last year for having a number of high profile vulnerabilities, but outside the world of the encryption specialist, understanding of what it does is limited. This is fine, as security tools such as SSL and TLS are supposed to be transparent to the end user.

What is SSL

The Secure Socket Layer (SSL) was a proprietary security protocol developed by Netscape; only versions 2 and 3 have been widely used. Version 2.0 was released in February 1995 and version 3.0 in 1996. The 1996 draft of SSL 3.0 was published by the IETF as a historical document in RFC 6101.

The protocol is based on a socket, which is the address of a service and consists of the internet protocol address, the port number and the protocol being used.

Hence the following, for example, are two distinct sockets even though they share the same address and port:

  • UDP 192.0.2.1:443
  • TCP 192.0.2.1:443

As the name SSL suggests, this is a form of security based around the socket on a server. Typically we expect to see SSL on secure websites, indicated by HTTPS in the URL (address) bar and the padlock.

Secure HTTP typically uses socket TCP 443, whilst plaintext HTTP will use socket TCP 80.
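The point that the protocol is part of the socket's identity can be sketched in a few lines of Python: a TCP socket and a UDP socket can both be bound to the same address and port at the same time without a conflict, because they are distinct sockets.

```python
import socket

# A socket is identified by (protocol, address, port). Both binds below
# succeed simultaneously because TCP and UDP are different protocols.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

tcp.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
port = tcp.getsockname()[1]
udp.bind(("127.0.0.1", port))     # same address and port, different protocol

print(f"TCP and UDP both bound to 127.0.0.1:{port}")
tcp.close()
udp.close()
```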

A key component from a user's perspective is the digital certificate associated with the SSL connection.

To create an SSL certificate for a web server requires a number of steps to be completed; a simplified version is:

  1. Generation of identity information including the private/public keys
  2. Creation of a Certificate Signing Request (CSR) which includes only the public key
  3. The CSR is sent to a CA who validates the identity
  4. The CA generates an SSL certificate that is then installed on the server

The process of proving the identity of a server is covered by my blog on PKI.

A server will require a number of cipher suites to be installed, which allow it to negotiate a common cipher available on both the client and the server. A cipher suite consists of a number of parts:
  • a key exchange algorithm
  • a bulk encryption algorithm
  • a message authentication code (MAC) algorithm
  • a pseudorandom function (PRF)

Examples of some cipher suites that might be found on a server are:
  • SSL_RC4_128_WITH_MD5
For a security service to be considered secure, it should support only strong cryptographic algorithms, such as those defined in the PCI DSS glossary v3:
  • AES (128 bits and higher)
  • TDES (minimum triple-length keys)
  • RSA (2048 bits and higher)
  • ECC (160 bits and higher)
  • ElGamal (2048 bits and higher)
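As a sketch, the minimums above can be captured in a small lookup table and used to flag weak configurations. The table simply restates the PCI DSS glossary figures (TDES triple-length keys expressed as 168 bits); the function name is my own:

```python
# Minimum key sizes (bits) considered strong, per the PCI DSS glossary v3
# list above. Anything absent from the table is treated as not strong.
STRONG_MINIMUM_BITS = {
    "AES": 128,
    "TDES": 168,     # triple-length keys
    "RSA": 2048,
    "ECC": 160,
    "ElGamal": 2048,
}

def is_strong(algorithm: str, key_bits: int) -> bool:
    """Return True if the algorithm/key-size pair meets the minimum."""
    minimum = STRONG_MINIMUM_BITS.get(algorithm)
    return minimum is not None and key_bits >= minimum

print(is_strong("AES", 256))    # True
print(is_strong("RSA", 1024))   # False: below the 2048-bit floor
```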

For additional information on cryptographic key strengths and algorithms, see NIST Special Publication 800-57 Part 1.

SSL Handshake

The steps involved in the SSL handshake are as follows
  1. The client sends the server the client's SSL version number, cipher settings, session-specific data, and other information that the server needs to communicate with the client using SSL.
  2. The server sends the client the server's SSL version number, cipher settings, session-specific data, and other information that the client needs to communicate with the server over SSL. The server also sends its own certificate, and if the client is requesting a server resource that requires client authentication, the server requests the client's certificate.
  3. The client uses the information sent by the server to authenticate the server. If the server cannot be authenticated, the user is warned of the problem and informed that an encrypted and authenticated connection cannot be established.
  4. Using all data generated in the handshake thus far, the client creates the pre-master secret for the session, encrypts it with the server's public key (obtained from the server's certificate, sent in step 2), and then sends the encrypted pre-master secret to the server.
  5. The server uses its private key to decrypt the pre-master secret, and both the client and the server use it to derive the master secret. Both then use the master secret to generate the session keys, which are symmetric keys used to encrypt and decrypt information exchanged during the SSL session and to verify its integrity.
  6. The client sends a message to the server informing it that future messages from the client will be encrypted with the session key. It then sends a separate (encrypted) message indicating that the client portion of the handshake is finished.
  7. The server sends a message to the client informing it that future messages from the server will be encrypted with the session key. It then sends a separate (encrypted) message indicating that the server portion of the handshake is finished.
  8. The SSL handshake is now complete and the session begins. The client and the server use the session keys to encrypt and decrypt the data they send to each other and to validate its integrity.
  9. This is the normal operation condition of the secure channel. At any time, due to internal or external stimulus (either automation or user intervention), either side may renegotiate the connection, in which case, the process repeats itself.
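In practice, the client's side of steps 1 and 3 is handled by the TLS library. A minimal sketch using Python's standard ssl module, showing the client-side settings (protocol versions offered, certificate checking) that feed into the handshake; no network connection is made here:

```python
import ssl

# Build the client-side configuration used when initiating a handshake.
# create_default_context() chooses the protocol versions and cipher suites
# to offer (step 1) and enables certificate verification (step 3).
context = ssl.create_default_context()

print(context.check_hostname)                       # server name is checked against the cert
print(context.verify_mode == ssl.CERT_REQUIRED)     # unauthenticated servers are rejected
print(context.minimum_version)                      # oldest protocol version offered
```

Wrapping a connected socket with `context.wrap_socket(sock, server_hostname=...)` is what actually runs the handshake steps listed above.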

Attacks on SSL

The SSL 3.0 cipher suites have a weaker key derivation process: half of the master key that is established is fully dependent on the MD5 hash function, which is not resistant to collisions and is therefore not considered secure. In October 2014 a vulnerability in the design of SSL 3.0 was reported, which makes the CBC mode of operation with SSL 3.0 vulnerable to a padding attack.

Renegotiation attack

A vulnerability in the renegotiation procedure was discovered in August 2009 that can lead to plaintext injection attacks against SSL 3.0 and all current versions of TLS. It allows an attacker who can hijack an HTTPS connection to splice their own requests into the beginning of the conversation the client has with the web server.

Version rollback attacks

An attacker may succeed in influencing the cipher suite selection in an attempt to downgrade the cipher suite strength, to use either a weaker symmetric encryption algorithm or a weaker key exchange. In 2012 it was demonstrated that some extensions to the original protocols are at risk: in certain circumstances this could allow an attacker to recover the encryption keys offline and access the encrypted data.

POODLE attack

On October 14, 2014, Google researchers published a vulnerability in the design of SSL 3.0, which makes cipher block chaining (CBC) mode of operation with SSL 3.0 vulnerable to the padding attack (CVE-2014-3566). They named this attack POODLE (Padding Oracle On Downgraded Legacy Encryption).
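The usual mitigation for POODLE is to stop offering SSL 3.0 at all. A minimal sketch with the modern Python ssl module (recent versions already exclude SSL 3.0 by default; the setting is made explicit here):

```python
import ssl

# Refuse anything older than TLS 1.2. This rules out SSL 3.0, the version
# POODLE exploits, so a downgrade attack cannot force the connection back
# to the vulnerable CBC padding scheme.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

print(context.minimum_version)
```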

BERserk attack

On September 29, 2014 a variant of Daniel Bleichenbacher’s PKCS#1 v1.5 RSA Signature Forgery vulnerability was announced by Intel Security: Advanced Threat Research. This attack, dubbed BERserk, is a result of incomplete ASN.1 length decoding of public key signatures in some SSL implementations, and allows a MITM attack by forging a public key signature.

Heartbleed Bug

The Heartbleed bug is a serious vulnerability in the popular OpenSSL cryptographic software library, affecting versions 1.0.1 to 1.0.1f. This weakness allows stealing the information protected, under normal conditions, by the SSL/TLS encryption used to secure the data payloads. SSL/TLS provides communication security and privacy over the Internet for applications such as web, email, instant messaging (IM) and some virtual private networks (VPNs).

Friday, 20 February 2015

Trust within the PKI

The Public Key Infrastructure (PKI) is the backbone of eCommerce on the Internet; however, it is something most users don't understand.

Starting from a general user's perspective: when doing anything that requires confidentiality of information, they are told to look for the padlock in the browser and check for https at the start of the URL. They are told to trust the padlock.

What the padlock means is twofold:

  • The web server has had its identity authenticated
  • The communication with the server is protected

Here I want to discuss what it means when we are told the web server has been authenticated. Authentication is the process of determining whether someone or something is, in fact, who or what it is declared to be. With a web server, its identity will be authenticated by the issuer of the digital certificate that has been installed on the server.

The owner (admin) of the web server has gone to a certification authority (CA) and proved the identity of the server to the satisfaction of the CA. Once the CA is happy with the identity, it will issue a digital certificate containing the name of the server, the public key of the server and a signature from the CA. The signature is a block of text that has been encrypted with the private key of the issuing CA.

In English: we trust the identity of the server based on a third party verifying the identity of the server to an indeterminable standard. The third party (the issuing CA) is an organisation we likely know nothing about, which has used a process to determine the identity that we also know nothing about. The issuing CA could have done a simple check, such as confirming the requester responded to an email sent to an address on the application, or a detailed check of identity, such as papers of incorporation of an organisation, financial background and so on.

Basically, we become very trusting because someone we don't know has said the web server is trustworthy. This seems very bad on the face of it; however, with the PKI we actually have a method of getting some assurance, although it does rely on trusting someone at some point.

With the PKI, there are root CA organisations that, through checks based on the IETF and ITU recommendations on PKI infrastructure, have proved their identity to operating system and browser vendors and consequently have had their public keys included in the operating system and browser. Each root CA then performs checks on child CAs to the same recommendations to verify their identity. This continues with each CA in the chain verifying the next link, until the CA that issued a certificate to the web server is verified by its parent CA, creating a tree hierarchy. Thus the chain of trust is built up.

Back to the certificates: a public key on a certificate allows a message encoded with the corresponding private key to be read. This way the public keys of the root CA certificates in the browser can be used to read the signatures on the certificates issued to their child CAs. Each certificate issued along the chain can thus be verified and authenticated, meaning the certificate issued to the web server can be verified and the identity of the server can be authenticated.
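The signature check at each link of the chain can be sketched with toy numbers. This is a deliberately tiny textbook-RSA example (real CAs use 2048-bit keys and padded signature schemes, and the function names here are my own), but it shows the mechanism: only the private key can produce a signature that the public key will accept.

```python
import hashlib

# Toy RSA key pair, far too small for real use:
# n = 61 * 53 = 3233, e = 17, and d = 413 satisfies e*d ≡ 1 (mod lcm(60, 52)).
n, e, d = 3233, 17, 413

def toy_sign(message: bytes) -> int:
    """CA side: hash the certificate data and 'encrypt' the hash with d."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def toy_verify(message: bytes, signature: int) -> bool:
    """Browser side: 'decrypt' the signature with the public key (n, e)
    and compare with a freshly computed hash of the certificate data."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

cert_data = b"CN=www.example.com, public key ..."
sig = toy_sign(cert_data)

print(toy_verify(cert_data, sig))        # True: genuine signature
print(toy_verify(cert_data, sig + 1))    # False: altered signature is rejected
```

In the real chain, each CA's certificate carries the public key used to run exactly this kind of check on the certificates it has issued.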

PKI Trust chain

So, providing the trust model is upheld, with no certificates issued without the identity of the receiving entity being checked thoroughly, we can rely on the authentication of the entity. Therein lies the problem: we need robustness in the PKI to prevent fraudulent certificates being issued, or CAs being hacked and certificates issued without proper authorisation.

Thursday, 19 February 2015

Internet connect toys: what is the risk?

It has been announced that Mattel is partnering with US start-up ToyTalk to develop Hello Barbie, which will have two-way conversations with children using a speech-recognition platform connected across the Internet.

My initial thoughts were: is this hackable, and could it be used to abuse or groom children? Embedded platforms are often limited in their ability to deploy strong countermeasures such as encryption due to limited processing power, despite modern processors becoming more powerful, and they are difficult to update if a vulnerability is discovered.

A similar toy, Vivid Toy’s ‘Cayla’ talking doll, has been found to be vulnerable to hacking by security researcher Ken Munro, from Pen Test Partners.

The risk of abusing children through insecure internet connected devices was demonstrated by a baby monitor being hacked and abuse being shouted at a baby. The device from Foscam has since been patched, with the update made available, but a search of Shodan shows devices still susceptible, as deployed devices were not updated.

The toy will listen to the child's conversation and adapt to it over time, according to the manufacturer; so, for instance, if a child mentions that they like to dance, the doll may refer to this in a future chat. The device relies on a WiFi connection, and the speech is likely to be processed by a remote facility. Listening devices have already proved controversial for privacy, as demonstrated by Samsung. The doll is not likely to have a sophisticated control panel, so will the connection to WiFi be through WiFi Protected Setup (WPS)? This has already been the subject of attacks.

Could the toy be used for abuse? I think the answer is that someone will learn how to attack the toy. Whether it will be used in real life is another matter, but the impact if it were is such that I hope security has been designed in from the start.

What could be done?

Mattel has not given many details of the security measures deployed to protect the toy and the information recorded. It could be considered that the information is PII and therefore subject to data protection laws.

The question is: should there be a mandatory level of protection that manufacturers have to meet for internet connected devices if they are to be used by children or could potentially pick up sensitive information? Should a kitemark scheme be introduced to show the manufacturer has met the minimum levels of due care and due diligence?