Thursday 20 December 2012

Plastic credit – whose fault? (a potted history of the payment card)


When you are recovering from the expenditure of the holiday period and the credit card bill arrives, you may wonder who is responsible for the plastic we use to make purchases.

Credit in various forms and mechanisms has been around since the early days of people conducting transactions. However, the inventor of the first bank-issued credit card is widely regarded to be John Biggins of the Flatbush National Bank of Brooklyn, New York. In 1946, Biggins launched the "Charge-It" program between bank customers and local merchants.

In 1950, Diners Club issued their credit card in the United States. The Diners Club card was invented by the club's founder, Frank McNamara, and was intended to pay restaurant bills. A customer could eat without cash at any restaurant that accepted the card; Diners Club would pay the restaurant and the cardholder would repay Diners Club. The Diners Club card was at first technically a charge card rather than a credit card, since the customer had to repay the entire amount when billed.

American Express issued their first credit card in 1958. Bank of America issued the BankAmericard bank credit card later in 1958.

By 1959 many financial institutions had begun credit programs. Simultaneously, card issuers were offering the added service of "revolving credit". This gave the cardholder the choice to either pay off their balance or maintain a balance and pay a finance charge.

During the 1960s, many banks joined together and formed “Card Associations,” a new concept with the ability to exchange information of credit card transactions; otherwise known as interchange. The associations established rules for authorization, clearing and settlement as well as the rates that banks were entitled to charge for each transaction. They also handled marketing, security, and legal aspects of running the organization.

The two most well-known card associations were National BankAmericard and Master Charge, which eventually became Visa and MasterCard.

By 1979, electronic processing was progressing. Dial-up terminals and magnetic stripes on the back of credit cards were introduced, enabling retailers to swipe a customer's credit card through an electronic terminal. These terminals could access the issuing bank's cardholder information, giving authorisations and processing settlements in one to two minutes.

Track 1 of the magnetic stripe was designed to hold 79 characters, one less than the 80 columns on punch cards; track 2 used 40 characters containing only the essential data required for a transaction, providing faster communication through dial-up modems by reducing the amount of data sent.
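As a rough illustration of how compact that track 2 layout is, here is a minimal Python sketch of parsing a track 2 string. The field order (PAN, '=' separator, expiry, service code, discretionary data) follows the ISO/IEC 7813 convention; the sample value is fabricated for illustration and is not a real card.

# Minimal sketch: parsing a track 2 string (ISO/IEC 7813 layout).
def parse_track2(track2: str) -> dict:
    # Track 2 is framed by a start sentinel ';' and an end sentinel '?'.
    data = track2.strip().lstrip(';').rstrip('?')
    pan, _, rest = data.partition('=')   # '=' is the field separator
    return {
        'pan': pan,                      # primary account number (up to 19 digits)
        'expiry': rest[0:4],             # YYMM
        'service_code': rest[4:7],       # three-digit service code
        'discretionary': rest[7:],       # issuer discretionary data
    }

# Fabricated example using a well-known test card number.
print(parse_track2(';4111111111111111=13121011234567890?'))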

The creation of chip and PIN (the name given to the initiative in the UK) came about from smartcard technology, developed in the USA, which used an embedded microchip accessed through a four-digit Personal Identification Number (PIN) to authorise payments. Users no longer had to hand over their cards to a clerk; instead they inserted them into a chip and PIN terminal and entered their unique four-digit code. A message would be sent to an acquirer, via a modem, which would check whether the PIN was correct and, if so, authorise the transaction.

In 1990 France introduced chip and PIN cards based upon the B0' standard (for French domestic use only), and the change cut card fraud by more than 80%.

The technology was trialled in the UK starting in May 2003 in Northampton; its success there, and the high frequency of debit card use at the time, allowed HBOS to introduce the first cashback scheme. It was rolled out across the UK in 2004 to become the dominant way for the public to pay for goods, with the advertising slogan 'safety in numbers' highlighting the personalised number aspect of the system. More than 100 countries now use the technology, online and offline, establishing it as a dominant form of payment used by people all over the world.

References

http://www.fidelitypayment.com/resources/what_are_merchant_services
http://wiki.answers.com/Q/Who_is_John_Biggins_of_the_Flatbush_National_bank_in_Brooklyn_NY
http://www.cardsave.net/blog/the-history-of-chip-and-pin/
http://en.wikipedia.org/wiki/Chip_and_PIN
http://www.theukcardsassociation.org.uk/Advice_and_links/index.asp

Wednesday 19 December 2012

November ADSL Router Analysis

Analysis of the log files from my ADSL router for November: the level of events was similar to October, and the USA was again the main source of events. China has not been a detected source IP since August. Turkey has been consistent throughout the year so far.


The detected events broke down by country as follows:

Country         Source IPs   Attacks from country
USA             2            20
Turkey          16           16
Germany         1            6
Azerbaijan      1            1

October ADSL Router Analysis

Analysis of the log files from my ADSL router for October: there was an increase compared to September; however, detected events did not reach the levels of July.


The detected events broke down by country as follows:

Country         Source IPs   Attacks from country
USA             6            29
Turkey          16           16
Saudi Arabia    1            12
Germany         1            6

Thursday 13 December 2012

Vulnerability scans & false positives: The importance of sanitising input

One of the contentious issues with vulnerability scanning, in particular of web applications, is false positives. This means the scan indicates a vulnerability exists whereas in reality there is none; a response to the scanning has been incorrectly classified. These are type I errors: a result that indicates a given condition is present when actually it is not.

A type II error is where a condition is present but not indicated as being so. This type of error is a false negative and is often the worse condition, as it generates a false sense of security since vulnerabilities were not identified.

Web applications are frequently vulnerable to injection exploits, covering vulnerabilities such as SQL injection, cookie manipulation, command injection, HTML injection, cross-site scripting and so on.

To prevent injection it is recommended that the application use a combination of parameterised statements, escaping, pattern checking, database permissions and input validation. Additionally, the application should check that input values are within range and that unexpected values are handled in a consistent manner. Generated error messages should not give away information that may be useful to an attacker, but should help a user enter the required input, improving the user experience of the web site.
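As a minimal sketch of two of these controls, parameterised statements and input validation, the following Python uses the standard sqlite3 module; the table, column and validation pattern are invented for illustration, not taken from any real application.

import re
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)')
conn.execute("INSERT INTO users (name) VALUES ('alice')")

def find_user(name: str):
    # Input validation: accept only the expected pattern and fail closed.
    if not re.fullmatch(r'[A-Za-z0-9_-]{1,32}', name):
        raise ValueError('invalid user name')  # generic error, no internals leaked
    # Parameterised statement: the value is bound by the driver and treated
    # as data, never concatenated into the SQL text.
    return conn.execute('SELECT id, name FROM users WHERE name = ?',
                        (name,)).fetchall()

print(find_user('alice'))          # [(1, 'alice')]
# find_user("alice' OR '1'='1")    # rejected by the validation step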

Testing for injection vulnerabilities is done by modifying GET and POST requests, as well as cookies and other persistent data. By changing the content to inject a piece of additional code before sending the modified requests to the web application, the scanner attempts an injection attack. The additional code can represent a SQL command, HTML code or OS commands, depending on the attack being simulated.

The scanner then examines the responses back from the web application to determine whether the injection attempt has succeeded, looking for evidence of successful execution of the code. This evidence can be a delay in the return of the response, the response including the injected input in a form the browser would interpret as HTML code, a detected error message, or data retrieved by the simulated attack.
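A rough sketch of that probe-and-inspect loop for a single GET parameter might look like the following Python (using the requests library). The target URL, parameter name and error strings are hypothetical, and real scanners use far richer payload lists and heuristics.

import time
import requests

TARGET = 'http://test.example/search'   # hypothetical test application
ERROR_STRINGS = ['SQL syntax', 'ODBC', 'ORA-', 'Warning: mysql']

def probe(payload: str) -> None:
    start = time.time()
    r = requests.get(TARGET, params={'q': payload}, timeout=30)
    elapsed = time.time() - start
    if payload in r.text:
        print('payload reflected unencoded: input not sanitised')
    if any(e in r.text for e in ERROR_STRINGS):
        print('database error message detected: possible SQL injection')
    if elapsed > 5:
        print('response delayed: possible time-based injection')

probe('<script>alert(1)</script>')   # HTML/script injection attempt
probe("' OR SLEEP(5)--")             # time-based SQL injection attempt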

ASV and vulnerability scans generate a large number of reactions when testing for various injection techniques. These reactions can indicate a vulnerability exists or be a false positive.

Often the scanning detects that the response has the injected code embedded within it even though the injected code failed to execute in the intended manner, and the automated software includes these as results in its report. In reality these results are false positives, in that the attempt did fail; however, they also indicate that the inputs to the application have not been sanitised to ensure only the expected range of inputs is processed. What has happened is that the modified input has passed through the application and been included in the response sent back to the vulnerability scan engine without being filtered out.

Although these false positives can be ignored they show that the application is not sanitising variables and values and is allowing a potential vector for future unknown attacks to occur. Eliminating an attack vector by properly sanitising variables, cookies and other forms of persistent data within a web application environment will help protect against attacks in the future.

The advantage of having an application that correctly sanitises input is that the number of false positives detected during vulnerability scanning is reduced; noise that may be masking a true vulnerability is removed, and the need to argue over which results are false positives is reduced, especially if ASV scans are being conducted.

A disadvantage of not sanitising input is that blocks of results are often classed as false positives wholesale, rather than each of hundreds of results being examined; occasionally this means a true result is incorrectly classed as a false positive, creating a type II error. Incorrectly identifying a positive response as a false positive is worse than including all the responses to a test, as it gives a false sense of assurance if vulnerabilities appear not to have been identified.

When attempting to manually examine the results from automated testing to identify false positives, additional problems are encountered. Vulnerability scanners use their own engines to generate HTTP requests and analyse responses, but when trying to reproduce the testing in a browser, the current generation of browsers is equipped with technology designed to reduce attack vectors by filtering the requests sent and the responses received. Browsers such as IE will detect and block attempts at cross-site scripting; IE has improved since version 6 and the latest versions prevent traversal attacks and the like from the URL box in the browser.

This browser behaviour requires additional tools for manual testing of vulnerabilities. Tools such as a web proxy like WebScarab or Burp Suite are used to intercept the request objects from the browser, allowing modification before sending them on to the server. The proxy also allows interception of the response objects before they reach the browser, so each response can be examined at HTML source code level rather than letting a browser interpret the response, filter out malicious scripts and display the result in its pane.
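As a minimal sketch of that workflow, a request can also be replayed outside the browser through the local proxy so the raw response is captured with no browser filtering at all. This assumes an intercepting proxy such as Burp Suite listening on its default 127.0.0.1:8080; the target URL is hypothetical.

import requests

# Route the request through the local intercepting proxy so it can be
# paused, modified and inspected in the proxy's UI.
proxies = {'http': 'http://127.0.0.1:8080',
           'https': 'http://127.0.0.1:8080'}

r = requests.get('http://test.example/page?item=42',
                 proxies=proxies, verify=False)  # verify=False: the proxy re-signs TLS
print(r.status_code)
print(r.text[:200])   # raw HTML source, uninterpreted by any browser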
Even with just a reasonably sized website, there can be hundreds of results from the testing.

Eliminating the generation of responses, especially false ones, by correctly sanitising the input to the application will make the scanning and reporting more efficient and reduce the time spent on false positives in the future. An organisation that looks at what is causing the generation of false positive responses to a test scenario, and eliminates the causes rather than ignoring the false responses, will be improving its security posture and making scanning more efficient. It will be reducing the chance of a vulnerability being ignored because it was lost in the noise or wrongly classified as false.

In summary, it is important to ensure a web application correctly sanitises its input, to reduce the production of false positives and improve the effectiveness of vulnerability scanning by reducing the noise which masks important results.

Wednesday 12 December 2012

Catching Insiders

I have discussed the insider threat a number of times and recently came across the article Five Habits Of Companies That Catch Insiders on the Dark Reading website, which discusses the controls or habits that will aid in catching insiders.

The report this article was based on, Insider Threat Study: Illicit Cyber Activity Involving Fraud in the U.S. Financial Services Sector, made a number of recommendations, which I have listed here:


Behavioral and/or Business Process

  • Clearly document and consistently enforce policies and controls.
  • Institute periodic security awareness training for all employees.

Monitoring and Technical

  • Include unexplained financial gain in any periodic reinvestigations of employees.
  • Log, monitor, and audit employee online actions.
  • Pay special attention to those in special positions of trust and authority with relatively easy ability to perpetrate high value crimes (e.g., accountants and managers).
  • Restrict access to PII.
  • Develop an insider incident response plan to control the damage from malicious insider activity, assist in the investigative process, and incorporate lessons learned to continually improve the plan.
I do recommend reading the article and the report to gain a better understanding of the controls that reduce the insider threat.

A New Year BYOD hangover for employers

Researchers have released news today of a zero-day attack on the Samsung Smart 3D LED TV. Whilst wondering how many of these will be unwrapped and installed over the Christmas holiday period that may be susceptible to this form of attack, my thoughts turned to information security professionals, who must surely be wondering what brand new gadgets employees will be bringing into the organisation when they return to work after the holidays. Research has shown that employees, especially younger ones, will use their own devices (BYOD) and cloud services such as Dropbox even if the organisations they work for have policies banning such activities.

Every organisation should be considering policies for the use of BYOD within their environment, bearing in mind that restrictive policies often fail: employees, from the senior level downwards, who feel the policies interfere with doing their job and cannot see the implication of their actions for the security and governance of their employer's business will continue with unsanctioned behaviour as they try to meet deadlines.

Organisations need well thought out policies and procedures in place for implementing them; employees need to be informed of, and frequently refreshed about, the policies and the implications to the organisation of breaches of information security, as part of a continual information security education programme.

Policies on the use of BYOD should outline the privacy issues affecting both the owner of the equipment and the employer; they should cover the privacy the employee should expect when connecting their device to the corporate systems. Another important section of the policy should cover what happens when a device is lost or upgraded. Requirements for notifying the IT department of such circumstances need to be included, and the possibility needs to be considered that, if it is not possible to wipe the corporate data alone, the whole device may be wiped, losing all of the employee's data.

The employee would need to agree to the policy before being able to use their own devices. There are often advantages to allowing employees to use their own devices, in terms of improved productivity and reduced expenditure; however, there are costs and negative implications for both the employee and the employer.

Topics to be covered by a policy include:
  • Device Selection
  • Encryption
  • Authentication
  • Remote Wipe Capabilities
  • Incident Management
  • Control Third-Party Apps
  • Network Access Controls
  • Intrusion Prevention / Detection Software (IPS/IDS)
  • Anti Virus - AV
  • Connectivity (Bluetooth/Wifi mobile hotspot)
It is not possible for an organisation to support all devices on the market; therefore it may be necessary to limit allowed devices to a subset of those available. Selection of those devices will be a contested decision, with various camps complaining their favourite manufacturer or OS is not included. Ensuring the list is circulated to employees and reviewing the supported devices on a regular basis will help alleviate device selection problems.

There are a large number of technical solutions available; however, the selected solution should support the organisation's aims and mission. Within the selection process, as with the policy generation, it may be necessary to seek expert opinion.

There is no reason why the use of BYOD within the organisation cannot be allowed, giving greater flexibility to employees, with improved productivity, in a controlled environment that will protect the organisation. This is far better than having employees using their own devices in an uncontrolled, and possibly unknown, manner, leaving an organisation vulnerable to a problem it is not aware of. Having a policy that supports employees makes it easier to apply sanctions to those who do not comply; no policy allows a situation where there is no control, and a restrictive policy will often force employees to use their devices on the quiet.

Thursday 6 December 2012

Data Protection & the EU

As part of the legal domain on the CISSP course, I discussed with the class yesterday the Data Protection requirements and how EU data protection maps closely to the OECD data privacy requirements. We also discussed the situation over transferring data from the EU to the US, the need for organisations in the US to sign up for and remain signatories of the Department of Commerce Safe Harbor agreement, whether the US Patriot Act trumps the Safe Harbor agreement, and whether EU companies should consider it prudent to transfer PII to the US under Safe Harbor if the government can read the data.

Today I found two interesting articles about this. Neither gives a rosy feeling that there is a solution to the problem, or that there will be one in the near future.

Wednesday 5 December 2012

Vulnerability Disclosure

I am delivering CISSP training this week, and today we were discussing software development security in class, during which we discussed the role vulnerability researchers have and the effect that vulnerability disclosure with proof-of-concept code has on the development of malware. As is often the case when I am delivering the training, I found in the InfoSec news feeds a story relevant to the topics in the domains of the CISSP: today it was "Exploit kit authors thrive due to PoC code released by whitehats", published by Help Net Security, discussing exactly the same point and giving weight to the discussions we had in class.

Insider Threat hits Swiss Spy Agency

In the news today: "Swiss spy agency warns CIA, MI6 over 'massive' secret data theft", in which a disgruntled employee steals terabytes of data. The employee became disaffected after his warnings to his employers about the operation of systems were ignored. With his admin rights giving access to a lot of data, he downloaded it onto portable hard drives and walked out of the building.

One needs to question whether he had the "need to know" for all the data systems, and why it was possible to download vast quantities of data to portable drives. Did security not check employees leaving the building? Was there adequate supervision of those with elevated privileges?

There are a number of controls that should have been in place; I suspect some will now be put in place.

Wednesday 28 November 2012

Tools Update (28th Nov)

My slightly irregular update on new and updated information security tools that I have come across or use. The tools are mainly those for PenTesting, although other tools are sometimes included. As a bit of background into how I find these tools: I keep a close watch on Twitter and other websites for updates or new releases, and I also search for pen testing and security projects on SourceForge. Some of the best sites I have found for details of new tools and releases are http://www.toolswatch.org/ and http://tools.hackerjournals.com/

Nessus 5.0.2 available
http://www.tenable.com/products/nessus/select-your-operating-system
This is a bugfix-only release.
Nessus is a vulnerability and configuration assessment product. Nessus features high-speed discovery, configuration auditing, asset profiling, sensitive data discovery, patch management integration, and vulnerability analysis of your security posture.


Wireshark 1.8.4
http://www.wireshark.org/download.html
Wireshark 1.8.4 has been released. Installers for Windows, Mac OS X 10.5.5 and above (Intel and PPC), and source code is now available.

I will be trying to produce this update more regularly in December.

Friday 23 November 2012

Information security align with business needs

Just found a good article on the SC Magazine website, "Game on: Case study with Electronic Arts and Allgress", which discusses challenges around protecting EA's network. There is a notable quote in the article:

"In today's world, security executives need to be able to align their investments with business goals and be able to show that there is some sort of return – be it risk reduction, business enablement and or financial savings," says Borrero, who previously led security and risk management strategy at Pacific Gas and Electric and served a CISO role at Robert Half International, a global staffing firm.

This quote highlights one of the points I try to get across in security training: information security needs to be aligned with the needs of the business, and it must be an enabler, not a disabler, of the business meeting its mission. It is a quote I will be pointing to when I run my next training courses on the CISSP.



Wednesday 21 November 2012

PCI DSS


This is a copy of a blog I wrote for IT Governance, where I will be publishing a series of blog entries that will be looking at the PCI DSS and the 12 requirements contained within it; the series is starting by looking at the scoping of the cardholder data.

This is really a general overview of the standard and its requirements.

Compliance with PCI DSS should be considered the minimum level of security to implement; it does not ensure that an organisation is secure. However, compliance should ensure that an organisation has in place the procedures, policies and work practices that will reduce the possibility of a cardholder data breach and improve the effectiveness of any breach investigation. It is important that an organisation that is PCI DSS compliant maintains that compliance at all times.

The PCI DSS standard refers to a 'trusted' network, which is the environment within which cardholder data is identified as being processed, stored or transmitted. A key point about the standard and its related standards and guidelines is that they have been written to protect cardholder data as it is processed, stored or transmitted, and not necessarily to protect the entire infrastructure of an organisation from all security threats; it is a very specifically focused standard.

For an organisation, the best methodology for making the process of implementing and gaining compliance with the PCI DSS standard easier is to reduce the scope of the implementation, which is the size of the environment involved with cardholder data. This can be achieved by reducing the footprint of cardholder data within your organisation's environment: analyse where cardholder data is stored, processed or transmitted, reduce the locations within the environment where it is handled, and segregate the cardholder data from the rest of the organisation's environment. Creating a trusted environment and a non-trusted environment reduces the scope for implementing the standard. If it is possible to completely remove cardholder data from the environment, the environment becomes completely out of scope; however, if third parties are handling cardholder data for you, the responsibility for ensuring they are doing so within the requirements of the standard remains with your organisation.

Any part of an organisation, including its infrastructure, where cardholder data is processed, stored or transmitted is within scope; it is known as the cardholder data environment (CDE) and is required to be protected (it becomes trusted). Any part of the organisation and infrastructure that is out of scope, along with the public internet, is considered untrusted.

The standard looks at the protection between the CDE and the untrusted environment; it covers all aspects of providing that protection, including not only technical controls but also policies covering the employees who have access to the cardholder data and the trusted environment, as well as measures to help with the investigation of breaches.

In order to identify where cardholder data is within your environment, it is necessary to track the flow of data across the infrastructure, which requires up-to-date network documentation and knowledge of the processes within the organisation. Examine the data flow starting from where it enters your environment; this can be through a gateway from the internet onto the network, via phone lines, the postal system or fax. Following the movement of the data from its entry point(s) through the organisation until it permanently leaves the organisation or is destroyed will identify all the components involved in the processing, storage and transmission of the cardholder data.

At this point I like to give a reminder that the systems involved in handling cardholder data include not only the hardware and software of the infrastructure but everything connected to the CDE, and also the employees of the organisation who handle the data processing, phone calls, post, fax, maintenance of the infrastructure, administration of servers and so on. It also covers not only your own organisation but any third party organisations (known as service providers) who may be processing, storing or transmitting cardholder data on your behalf, or providing a service that affects the processing, storage or transmission of cardholder data within your organisation, such as an IT support company with administrator rights to firewalls, routers or servers.

Analysis of the data flow should cover not only the active processing, storage or transmission of cardholder data but also the storage and handling of backup media that may contain cardholder data, including when these are stored off-site.

Upon completion of the analysis of the flow of cardholder data through the organisation, it would be desirable to reduce the scope of the CDE by segregating the environment where the cardholder data is from the rest of the environment and also reducing the number of systems and processes where cardholder data is handled.

In addition to identifying where cardholder data is expected to be, steps should be taken to ensure that cardholder data is not located in places where it is not expected to be, e.g. in a spreadsheet on the desktop of a data analyst who generated a report from a database containing cardholder data. In order to examine the whole environment of an organisation for unknown locations of cardholder data, techniques from e-Discovery can be used. Hopefully all the identified locations will match the data flow analysis results; any mismatch shows cardholder data being held in unidentified locations that are likely not part of the normal data flow.
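As a simplistic sketch of that discovery idea, the following Python scans text files for 13-19 digit candidates and keeps those that pass the Luhn check that valid card numbers satisfy. The search root and file selection are hypothetical, and real discovery tools also handle binary formats, separators, encodings and databases.

import re
from pathlib import Path

CANDIDATE = re.compile(r'\b\d{13,19}\b')   # PANs are 13-19 digits

def luhn_ok(number: str) -> bool:
    # Luhn checksum: double every second digit from the right.
    digits = [int(d) for d in number][::-1]
    total = sum(digits[0::2])
    for d in digits[1::2]:
        total += d * 2 - 9 if d * 2 > 9 else d * 2
    return total % 10 == 0

def scan(root: str) -> None:
    for path in Path(root).rglob('*.txt'):        # illustrative: text files only
        text = path.read_text(errors='ignore')
        for candidate in CANDIDATE.findall(text):
            if luhn_ok(candidate):
                print(f'possible PAN found in {path}')

scan('/home/analyst')   # hypothetical search root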

Conducting a scoping exercise is only part of the story; the documentation of the reasons for identifying the CDE, and for reducing the scope from the whole organisation to a sub-set of it (the trusted network), needs to be kept for reference by any auditor. This is typical of the PCI DSS standard, in that it requires the documentation of the business reasons for implementing a control, or deciding it is not applicable, to be part of the PCI DSS documentation.

In the following blog entries I will be examining the requirements of the PCI DSS in turn and other issues affecting PCI DSS.

Monday 5 November 2012

eVoting

Due to the elections in the USA there have been a number of articles about eVoting and the security of voting machines.

It was interesting to see in an article "Internet-based and open source: How e-voting works around the globe" http://arstechnica.com/features/2012/11/internet-based-and-open-source-how-e-voting-is-working-around-the-globe/2/ a comparison of banking and eVoting

"With banking, you want to know—and have an extensive record—of what actions were taken when, and you associate them with a certain person. Voting, however, requires secrecy, and separation from a person and a specific identity. Furthermore, with banking, there is insurance and other precautions put into place to reassure customers against fraud."

The comparison is good: with online banking an audit trail of transactions is important, whereas voting needs security and strong authentication. However, anonymity is not necessarily a requisite for an eVoting system if it is to mirror some of the paper systems.

Within the UK, voting involves security and authentication (although these are often weak), but there is no anonymity; a person's identity is tied to their vote. All voting papers in the UK have a unique number that is recorded against the name of the voter when they attend the polling station, which allows the authorities (with a court order) to trace how an individual voted.

Monday 22 October 2012

Microsoft Licence Scam

Another email scam that no one should fall for!

Download for a Windows Licence key from Password@Linkedin.com


The link goes to http://m.victorponta.ro/page2.htm; I recommend not following it.

I always wonder if anyone does fall for this type of scam.

Friday 19 October 2012

Safety & Security

An interesting point that came out of the IET conference on System Safety incorporating Cyber Security, in Edinburgh this month, is that the German word Sicherheit means both security and safety, depending on the context. This highlighted the commonality between building safety systems and building secure systems: ensuring flaws, vulnerabilities and risks are taken into account during the requirements phase of a project and then built in during design and production. Naturally, as security and safety are parts of the requirements, testing will ensure these requirements have been met, and, to complete the lifecycle, the maintenance of the system needs to ensure the requirements continue to be built into the systems.

Techniques for writing safe code and for writing secure code are interchangeable and ensure that software flaws such as buffer overflows and inadequate input validation are eliminated. For those writing secure code, the more mature safe-code standards can help with guidance in the coding of projects, ensuring that the effects of unexpected features are eliminated.

Buffer overflows are still a common problem in modern software: 50% of CERT advisories still involve buffer overflows, despite the problem being known since 1972. The techniques for preventing and detecting them are well understood by programmers and testers, yet they are still being found by researchers in deployed software.

Adherence to coding standards and use of secure and safe programming techniques will reduce vulnerabilities in software. With web application attacks being the most common attack vector, along with social engineering, reducing the number of flaws in applications will reduce the number of successful attacks.

Wednesday 17 October 2012

Car Safety Standard Testing & InfoSec

At the IET cyber security conference, listening to the keynote by Mike StJohn-Green discussing "cyber security - who says we are safe", he raised the comparison with car safety: when buying security, are we looking for the Volvo, a name that is linked with car safety, or for the best product that meets our needs? He also mentioned the NCAP rating, a standard safety test in the EU for comparing the safety of cars. One of the problems is that, since safety sells cars, manufacturers design cars to get a higher rating; this does not mean that they are safe for occupants and pedestrians. The same goes for a lot of information security equipment: the testing does not always represent the real world environment or give the assurances required by senior management to make decisions.

Sunday 7 October 2012

Tools (7th Oct)

A slightly longer than normal interval in my update on new and updated information security tools that I have come across or use. The tools are mainly those for PenTesting, although other tools are sometimes included. As a bit of background into how I find these tools: I keep a close watch on Twitter and other websites for updates or new releases, and I also search for pen testing and security projects on SourceForge. Some of the best sites I have found for details of new tools and releases are http://www.toolswatch.org/ and http://tools.hackerjournals.com/

Core Impact V12.5
http://blog.coresecurity.com/
CORE Impact® Pro is the most comprehensive software solution for assessing and testing security vulnerabilities throughout your organization. Backed by 15+ years of leading-edge security research and commercial-grade development, Impact Pro allows you to evaluate your security posture using the same techniques employed by today’s cyber-criminals.

The Social-Engineer Toolkit (SET)
https://www.trustedsec.com/september-2012/the-most-advanced-version-of-the-social-engineer-toolkit-to-date-released/
This version is the collection of several months of development: over 50 new features and a number of enhancements, improvements, rewrites, and bug fixes. In order to get the latest version of SET, download subversion and type svn co https://svn.trustedsec.com/social_engineering_toolkit set/

BurpSuite 1.5rc2
http://releases.portswigger.net/2012/10/v15rc2.html
Burp Suite is an integrated platform for performing security testing of web applications. Its various tools work seamlessly together to support the entire testing process, from initial mapping and analysis of an application's attack surface, through to finding and exploiting security vulnerabilities.

SANS Investigate Forensic Toolkit (SIFT) Workstation Version 2.14
http://computer-forensics.sans.org/community/downloads
The SIFT Workstation is a VMware appliance, pre-configured with the necessary tools to perform detailed digital forensic examination in a variety of settings. It is compatible with Expert Witness Format (E01), Advanced Forensic Format (AFF), and raw (dd) evidence formats. The brand new version has been completely rebuilt on an Ubuntu base with many new capabilities and tools such as log2timeline that provides a timeline that can be of enormous value to investigators.

Wireshark 1.8.3
http://www.wireshark.org/download.html
Wireshark is the world's foremost network protocol analyzer. It lets you capture and interactively browse the traffic running on a computer network. It is the de facto (and often de jure) standard across many industries and educational institutions.

Friday 5 October 2012

Insider Fraud

Another example of an insider committing fraud: a Verizon system admin managed to take advantage of a scheme intended to keep critical infrastructure up to date.
http://www.fbi.gov/atlanta/press-releases/2012/former-verizon-wireless-network-engineer-sentenced-to-federal-prison-for-multi-million-dollar-fraud-scheme

Controls such as segregation of duties and supervision to prevent fraud were not rigorously in place. There should be systems whereby no single person has responsibility for payments, and adequate controls to guard against fraud; such controls require regular reviews of internal systems.


When it comes to preventing insider fraud, organizations would do well to more closely monitor experienced, mid-level employees with years on the job, according to a new study conducted by the CERT Insider Threat Center of Carnegie Mellon University's Software Engineering Institute in collaboration with the U.S. Secret Service.

The study found that, on average, insiders are on the job for more than five years before they start committing fraud and that it takes nearly three years for their employers to detect their crimes.

Secure Software Development

There are a number of good resources on secure programming from Microsoft, describing a secure development life cycle and tools. If you are programming with Microsoft tools then it is recommended that you look at their resources; however, the resources are not just of interest within their development environment but are in many cases applicable to others. In just the same way, there are other resources, such as those from OWASP and (ISC)2, that will help if you are developing using the Microsoft tools.

Microsoft's Security Development Lifecycle (SDL)

http://www.microsoft.com/security/sdl/default.aspx

The Microsoft site gives a lot of information on using a secure development lifecycle, much of which is transferable to other development environments; the principles behind Microsoft's SDL are pretty much good solid principles.

Free tools from Microsoft

Some of these tools are more tied to the Microsoft programming environment than others.

Threat Modeling Tool

The SDL Threat Modeling Tool helps engineers analyze the security of their systems to find and address design issues early in the software lifecycle. To help make threat modeling a little easier, Microsoft offers the free SDL Threat Modeling Tool, which enables non-security subject matter experts to create and analyze threat models by communicating about the security design of their systems, analyzing those designs for potential security issues using a proven methodology, and suggesting and managing mitigations for security issues.

http://blogs.technet.com/b/security/archive/2012/08/23/microsoft-s-free-security-tools-threat-modeling.aspx

Attack Surface Analyzer

Attack Surface Analyzer can help software developers and Independent Software Vendors (ISVs) understand the changes in Windows systems' attack surface resulting from the installation of the applications they develop. It can also help IT professionals, who are responsible for managing the deployment of applications or the security of desktops and servers, understand how the attack surface of Windows systems changes as a result of installing software on the systems they manage.

http://blogs.technet.com/b/security/archive/2012/08/02/microsoft-s-free-security-tools-attack-surface-analyzer.aspx

Anti-Cross Site Scripting Library

The Microsoft Anti-Cross Site Scripting Library V4.2.1 (AntiXSS V4.2.1) is an encoding library designed to help developers protect their ASP.NET web-based applications from XSS attacks. It differs from most encoding libraries in that it uses the white-listing technique, sometimes referred to as the principle of inclusions, to provide protection against XSS attacks. This approach works by first defining a valid or allowable set of characters, and encoding anything outside this set (invalid characters or potential attacks). The white-listing approach provides several advantages over other encoding schemes.

http://msdn.microsoft.com/en-us/security/aa973814.aspx
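To illustrate the white-listing principle itself, here is a short sketch of the idea in Python (this is not the AntiXSS API, just the underlying approach): define the characters considered safe and encode everything else.

# Sketch of white-list output encoding: anything outside the allowed
# character set is replaced with its HTML numeric entity.
SAFE = set('abcdefghijklmnopqrstuvwxyz'
           'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
           '0123456789 .,-_')

def encode_for_html(value: str) -> str:
    return ''.join(c if c in SAFE else f'&#{ord(c)};' for c in value)

print(encode_for_html('<script>alert("x")</script>'))
# &#60;script&#62;alert&#40;&#34;x&#34;&#41;&#60;&#47;script&#62;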

banned.h

The banned.h header file is a sanitizing resource designed to help developers avoid using, and to help identify and remove, banned functions from code that may lead to vulnerabilities. Banned functions are those calls that have been deemed dangerous because they make it relatively easy to introduce vulnerabilities into code during development.

http://blogs.technet.com/b/security/archive/2012/08/30/microsoft-s-free-security-tools-banned-h.aspx




PCI QSA

Just preparing for a new role that I have been asked to take up within IT Governance, as a PCI QSA, providing I can pass the exams.

I have undertaken our own PCI Foundation course (http://www.itgovernance.co.uk/products/1858) and am now working my way through "PCI DSS: A Practical Guide to Implementing and Maintaining Compliance" by Steve Wright (http://www.itgovernance.co.uk/products/1670).

I am also reviewing the material from American Express, Visa and MasterCard about their compliance programmes.

The PCI Validation Requirements for Qualified Security Assessors (QSA) recommend the following documents:

  • Payment Card Industry (PCI) Data Security Standard Security Audit Procedures (“PCI DSS Security Audit Procedures”)
  • PA-DSS Security Audit Procedures   

However, I am having problems finding the PCI DSS Security Audit Procedures on the PCI Security Standards website, a document that is referred to by a number of others on the site. A very early version of the PCI DSS audit document seems to indicate it has now been incorporated into the main documentation. It is a shame that the audit procedures are not a clearly defined document, as the PCI SSC website has a lot of useful documentation for the standard, as do the main card issuers' sites. Having worked with many standards from a range of industries, I have often found a lack of freely available documentation about them, which does not seem to be the case with the PCI DSS.


Ethics

A post about ethics from the BCS, "IT industry 'must get serious' about ethics" (http://bit.ly/WsTyyG), highlights that the IT industry should get serious about ethics. The post was based on Andrea Di Maio's blog "It Is Time for Industry, Government and Consumers To Get Serious About IT Ethics" (http://bit.ly/QWluKL).

I found Andrea's blog very interesting, as I have been involved in teaching ethics to university students and covering it in information security courses, and it is also covered in the (ISC)2 CISSP body of knowledge. In particular, the reference to consumers is a good point when you consider that ethics are the moral principles that govern a person's or group's behaviour, and that the codes of ethics of all professional institutions refer to working towards the general good not only of the professional body but of society in general.

The blog makes some very good points and gives some interesting examples to illustrate them; the comments by Bill McCluggage at the end of the blog are also interesting. I look forward to more from both Andrea and Bill, as it will form some good background for my activities.

I can see in the blog the line of reasoning that started with Norbert Wiener, whose work was in the area of "loss of human control and oversight", and whose article "A Scientist Rebels", in the January 1947 issue of The Atlantic Monthly, urged scientists to consider the ethical implications of their work. I personally think that IT in particular has a great effect on society, and all the stakeholders (Industry, Government and Consumers, in the words of Andrea Di Maio) need to be involved in ensuring that IT ethics are taken seriously.

September ADSL Router Analysis

September was a very quiet month with very few probes until the end of the month: just 21 probes from 3 countries.


The detected events broke down by country as follows:

Country         Source IPs   Attacks from country
Turkey          19           19
Japan           1            1
Malaysia        1            1


Wednesday 3 October 2012

PenTesting Pitfall


An article on Softpedia highlights one of the more unusual pitfalls of conducting PenTesting:

http://news.softpedia.com/news/Hack-Attack-on-City-of-Tulsa-Website-Turns-Out-to-Be-Part-of-Penetration-Testing-296151.shtml

As it turns out, hackers were not responsible for the breach. Instead, it was a company hired by the city’s IT department to perform penetration testing. The security firm utilised a test procedure that was unfamiliar to the IT department.

This shows the importance of engaging with the client when scoping the PenTest and ensuring that they understand the process and have defined lines of communication between the client and the PenTesters.

After the incident, the IT department managed to further strengthen the city’s systems, which are said to be targeted thousands of times daily by cyberattacks. It also made officials realise that incident management for IT security should be treated just like the one for natural disasters. The cost of the response to the false incident was around $20,000 (15,000 EUR) for the operation.

Monday 1 October 2012

Pension email scam

It didn't take long after the UK Government's new requirements on workplace pensions came into force on the first of October for the first email directing me to malware to appear in my inbox.


The wording within the email really does not make sense, and I hope no one gets taken in by this simple attempt.

Sunday 30 September 2012

e-Mail Scams


Some of the email scams I received today. I particularly liked the audacity of the first one, claiming to be from the FBI, in which they say:

"Since the Federal Bureau of Investigation is involved in this transaction, you have to be rest assured for this is 100% risk free it is our duty to protect the American Citizens."

Shame they didn't spoof an FBI address in the sender's field.

I also won the UK email address lottery in Bangkok, Thailand today, and have to contact a Japanese email address to collect my winnings.



Federal Bureau of Investigation (FBI)
Anti-Terrorist And Monitory Crime Division.
Federal Bureau Of Investigation.
J.Edgar.Hoover Building Washington Dc
Customers Service Hours / Monday To Saturday Office Hours Monday To Saturday.

Attn: Beneficiary,

This is to Officially inform you that it has come to our notice and we have thoroughly Investigated with the help of our Intelligence Monitoring Network System that you are having an illegal Transaction with Impostors claiming to be Prof. Charles C. Soludo of the Central Bank Of Nigeria, Mr. Patrick Aziza, Mr Frank Nweke, Dr. Philip Mogan, none officials of Oceanic Bank, Zenith Banks, Barr. Derrick Smith, kelvin Young of HSBC, Ben of FedEx, Ibrahim Sule,Larry Christopher, Dr. Usman Shamsuddeen, Dr. Philip Mogan, Paul Adim, Puppy Scammers are impostors claiming to be the Federal Bureau Of Investigation. During our Investigation, we noticed that the reason why you have not received your payment is because you have not fulfilled your Financial Obligation given to you in respect of your Contract/Inheritance Payment.

Therefore, we have contacted the Federal Ministry Of Finance on your behalf and they have brought a solution to your problem by coordinating your payment in total USD$11,000.000.00 in an ATM CARD which you can use to withdraw money from any ATM MACHINE CENTER anywhere in the world with a maximum of $4000 to $5000 United States Dollars daily. You now have the lawful right to claim your fund in an ATM CARD.

Since the Federal Bureau of Investigation is involved in this transaction, you have to be rest assured for this is 100% risk free it is our duty to protect the American Citizens. All I want you to do is to contact the ATM CARD CENTER via email for their requirements to proceed and procure your Approval Slip on your behalf which will cost you $150.00 only and note that your Approval Slip which contains details of the agent who will process your transaction.

CONTACT INFORMATION

NAME: Mr. Kelvin Williams
EMAIL:mrkelvinwillams@yahoo.cn

Do contact Mr. Kelvin Williams of the ATM PAYMENT CENTER with your details:

FULL NAME:
HOME ADDRESS:
TELL:
CELL:
CURRENT OCCUPATION:
BANK NAME:
AGE:

So your files would be updated after which he will send the payment information's which you'll use in making payment of $150.00 via Western Union Money Transfer or Money Gram Transfer for the procurement of your Approval Slip after which the delivery of your ATM CARD will be effected to your designated home address without any further delay.We order you get back to this office after you have contacted the ATM SWIFT CARD CENTER and we do await your response so we can move on with our Investigation and make sure your ATM SWIFT CARD gets to you.

Thanks and hope to read from you soon.

ROBERT S. MUELLER, III
DIRECTOR, FEDERAL BUREAU OF INVESTIGATION UNITED STATES DEPARTMENT OF JUSTICE WASHINGTON, D.C. 20535


Note: Do disregard any email you get from any impostors or offices claiming to be in possession of your ATM CARD, you are hereby advice only to be in contact with Mr. Kelvin Williams of the ATM CARD CENTER who is the rightful person to deal with in regards to your ATM CARD PAYMENT and forward any emails you get from impostors to Mr. Kelvin Williams.





UK LOTTERY ORGANIZATION
TICKET FREE/ONLINE E-MAIL ADDRESS WINNINGS DEPARTMENT.

Are you the correct owner of this email address? If yes then be glad this day as the result of the UK lotto online e-mail address and free-ticket winning draws of August 2012 held in Bangkok-Thailand has just been released and we are glad to announce to you that your email address won you the sweepstakes in the first category and you are entitled to claim the sum of Four Million, Six Hundred Thousand USA Dollars.

Your email address was entered for the online draw on this ticket #
68494-222 us  on this email address
uklottoemailwingthailand@yahoo.co.jp  for options on how to receive your won prize of US$4.6M.

To enable us ascertain you as the rightful winner and receiver of the $4.6Million , MAKE SURE you include the below listed information in your contact mail to him.

Your complete official names, country of origin and country of residence/work, contact telephone and mobile numbers, address, amount won, free ticket and lucky numbers, date of draw. OPTIONAL: - [Sex, age, occupation and job title].


Yours Faithfully,
Mr. Aaron Jones.
Online Winning Notification Department.
UK LOTTERY ORGANIZATION.



Computer Ethics (History)

Just because something is not illegal does not make it right

Computer Ethics help us make decisions on what is right for society rather than what is right for ourselves


  • Ethics is a broad philosophical concept that goes beyond simple right and wrong, and looks towards ‘the good life’
  • Morals are created by and define society, philosophy, religion, or individual conscience
  • A value system is a set of consistent ethical values and measures
  • A personal value system is held by and applied to one individual only
  • A communal or cultural value system is held by and applied to a community/group/society. Some communal value systems are reflected in the form of legal codes or law
  • A code of ethics is an instrument that establishes a common ethical framework for a large group of people

Ethical Models

Utilitarian Ethics

  • Jeremy Bentham and John Stuart Mill created Utilitarian Ethics in the 19th century
  • The basic premise is that actions that provide the greatest amount of good over bad or evil are ethical or moral choices 
  • There have been different interpretations of it 
  • One interpretation says that if, in a particular situation, the balance of good will be greatest when a particular action is taken, then that action should be taken
  • The next major viewpoint on Utilitarian Ethics would take the stance that it is not the action which produces the greatest good for a particular situation but the action that produces the greatest good 'over all like situations' in a society that should be taken

The Rights Approach

  • The Rights Approach is based on the principle that individuals have the right to make their own choices 
  • To judge right from wrong, or moral from immoral, under this system we would have to ask ourselves how our actions affect the rights of those around us
  • The greater the infraction our actions cause against those around us the more unethical those actions are
  • For example if it is immoral to lie then you should never lie under any circumstances

The Common-Good Approach

  • The Common-Good Approach began with Plato, Aristotle, and Cicero; it proposes that the common good is that which benefits the community
  • This type of system is where we get health care systems and public works programs
  • For example stealing would never be ethical because it would damage (take resources away from) society or our community

Cyber Ethics


Key points in the development of Cyber Ethics

1940s: Norbert Wiener
  • The Human Use of Human Beings (1950)

Mid 1960s: Donn B. Parker
  • "Rules of Ethics in Information Processing", Communications of the ACM (1968)
  • Developed the first code of professional conduct for the ACM (1973)

Late 1960s: Joseph Weizenbaum
  • Wrote ELIZA whilst at MIT
  • Computer Power and Human Reason (1976)

Mid 1970s: Walter Maner
  • Coined the phrase "computer ethics"
  • Published "Starter Kit in Computer Ethics" (1978)

1980s: James Moor and Deborah Johnson
  • Moor published "What Is Computer Ethics?" in Computers & Ethics
  • Johnson published "Computer Ethics"

1991: Walter Maner and Terrell Ward Bynum
  • First international, multidisciplinary conference on computer ethics



After World War 2, Norbert Wiener helped develop the theories of cybernetics, robotics, computer control, and automation. Wiener became increasingly concerned with what he believed was political interference with scientific research, and the militarization of science. He urged scientists to consider the ethical implications of their work.

Wiener published a series of books on the subject:

  • Cybernetics (1948)
  • The Human Use of Human Beings (1950)
  • God and Golem, Inc (1963)

Wiener's 'Ethical Methodology'

  1. Identify an ethical question or case regarding the integration of information technology into society. 
  2. Clarify any ambiguous or vague ideas or principles that may apply to the case or the issue in question. 
  3. If possible, apply already existing, ethically acceptable principles, laws, rules, and practices that govern human behaviour in the given society. 
  4. If ethically acceptable precedents, traditions and policies are insufficient to settle the question or deal with the case, use the purpose of a human life plus the great principles of justice to find a solution that fits as well as possible into the ethical traditions of the given society. 

History of Ethical Codes


The Code of Fair Information Practices. 

In 1973 the Secretary's Advisory Committee on Automated Personal Data Systems for the U.S. Department of Health, Education and Welfare recommended the adoption of the following Code of Fair Information Practices to secure the privacy and rights of citizens:

  • There must be no personal data record-keeping systems whose very existence is secret; 
  • There must be a way for an individual to find out what information is in his or her file and how the information is being used; 
  • There must be a way for an individual to correct information in his records; 
  • Any organization creating, maintaining, using, or disseminating records of personally identifiable information must assure the reliability of the data for its intended use and must take precautions to prevent misuse; and 
  • There must be a way for an individual to prevent personal information obtained for one purpose from being used for another purpose without his consent. 


Internet Activities Board (IAB) (now the Internet Architecture Board) and RFC 1087. 

RFC 1087 is a statement of policy by the Internet Activities Board (IAB) posted in 1989 concerning the ethical and proper use of the resources of the Internet. The IAB "strongly endorses the view of the Division Advisory Panel of the National Science Foundation Division of Network, Communications Research and Infrastructure," which characterized as unethical and unacceptable any activity that purposely:

  • Seeks to gain unauthorized access to the resources of the Internet, 
  • Disrupts the intended use of the Internet, 
  • Wastes resources (people, capacity, computer) through such actions, 
  • Destroys the integrity of computer-based information, or 
  • Compromises the privacy of users.

Ten Commandments of Computer Ethics (Computer Ethics Institute, 1992)

  • Thou shalt not use a computer to harm other people
  • Thou shalt not interfere with other people's computer work
  • Thou shalt not snoop around in other people's computer files
  • Thou shalt not use a computer to steal
  • Thou shalt not use a computer to bear false witness
  • Thou shalt not copy or use proprietary software for which you have not paid
  • Thou shalt not use other people's computer resources without authorisation or proper compensation
  • Thou shalt not appropriate other people's intellectual output
  • Thou shalt think about the social consequences of the program you are writing or the system you are designing
  • Thou shalt always use a computer in ways that ensure consideration and respect for your fellow humans 


National Conference on Computing and Values. 

The National Conference on Computing and Values (NCCV) was held on the campus of Southern Connecticut State University in August 1991. It proposed the following four primary values for computing, originally intended to serve as the ethical foundation and guidance for computer security:

  • Preserve the public trust and confidence in computers. 
  • Enforce fair information practices. 
  • Protect the legitimate interests of the constituents of the system. 
  • Resist fraud, waste, and abuse. 

The Working Group on Computer Ethics.

In 1991, the Working Group on Computer Ethics created the following End User's Basic Tenets of Responsible Computing: 
  • I understand that just because something is legal, it isn't necessarily moral or right. 
  • I understand that people are always the ones ultimately harmed when computers are used unethically. The fact that computers, software, or a communications medium exists between me and those harmed does not in any way change moral responsibility toward my fellow humans. 
  • I will respect the rights of authors, including authors and publishers of software as well as authors and owners of information. I understand that just because copying programs and data is easy, it is not necessarily right. 
  • I will not break into or use other people's computers or read or use their information without their consent. 
  • I will not write or knowingly acquire, distribute, or allow intentional distribution of harmful software like bombs, worms, and computer viruses. 

National Computer Ethics and Responsibilities Campaign (NCERC). 

In 1994, a National Computer Ethics and Responsibilities Campaign (NCERC) was launched to create an "electronic repository of information resources, training materials and sample ethics codes" that would be available on the Internet for IS managers and educators. The National Computer Security Association (NCSA) and the Computer Ethics Institute cosponsored NCERC. The NCERC Guide to Computer Ethics was developed to support the campaign. 
The goal of NCERC is to foster computer ethics awareness and education. The campaign does this by making tools and other resources available for people who want to hold events, campaigns, awareness programs, seminars, and conferences or to write or communicate about computer ethics. NCERC is a non-partisan initiative intended to increase understanding of the ethical and moral issues unique to the use, and sometimes abuse, of information technologies. 


The Hacker Ethic

Steven Levy (born 1951) is an American journalist who has written several books on computers, technology, cryptography, the Internet, cybersecurity, and privacy.

In 1984, he wrote a book called Hackers: Heroes of the Computer Revolution, in which he described a “hacker ethic” that became a guideline to understanding how computers have advanced into the machines we know and use today. He identified key tenets of this Hacker Ethic, such as that all information should be free, and that it should be used to “change life for the better”.

Access to computers—and anything which might teach you something about the way the world works—should be unlimited and total. Always yield to the Hands-On Imperative!

Levy is recounting hackers' abilities to learn and build upon pre-existing ideas and systems. He believes that access gives hackers the opportunity to take things apart, fix, or improve upon them and to learn and understand how they work. This gives them the knowledge to create new and even more interesting things. Access aids the expansion of technology.

All information should be free

Linking directly with the principle of access, information needs to be free for hackers to fix, improve, and reinvent systems. A free exchange of information allows for greater overall creativity. In the hacker viewpoint, any system could benefit from an easy flow of information, a concept known as transparency in the social sciences. As Stallman notes, "free" refers to unrestricted access; it does not refer to price.

Mistrust authority — promote decentralization

The best way to promote the free exchange of information is to have an open system that presents no boundaries between a hacker and a piece of information or an item of equipment that he needs in his quest for knowledge, improvement, and time on-line. Hackers believe that bureaucracies, whether corporate, government, or university, are flawed systems.

Hackers should be judged by their hacking, not criteria such as degrees, age, race, sex, or position

Inherent in the hacker ethic is a meritocratic system in which superficial criteria are disregarded in favour of skill. Levy articulates that criteria such as age, sex, race, position, and qualification are deemed irrelevant within the hacker community. Hacker skill is the ultimate determinant of acceptance. Such a code within the hacker community fosters the advance of hacking and software development. In an example of the hacker ethic of equal opportunity, L. Peter Deutsch, a twelve-year-old hacker, was accepted into the TX-0 community, though he was not recognized by non-hacker graduate students.

You can create art and beauty on a computer

Hackers deeply appreciate innovative techniques which allow programs to perform complicated tasks with few instructions. A program's code was considered to hold a beauty of its own, having been carefully composed and artfully arranged. Learning to create programs which used the least amount of space almost became a game between the early hackers.

Computers can change your life for the better

Hackers felt that computers had enriched their lives, given their lives focus, and made their lives adventurous. Hackers regarded computers as Aladdin's lamps that they could control. They believed that everyone in society could benefit from experiencing such power and that if everyone could interact with computers in the way that hackers did, then the Hacker Ethic might spread through society and computers would improve the world. The hacker succeeded in turning dreams of endless possibilities into realities. The hacker's primary object was to teach society that "the world opened up by the computer was a limitless one".






Friday 28 September 2012

IET Cybersecurity Conference 2012

I will be attending "The 7th International IET System Safety Conference, incorporating the Cyber Security Conference 2012" (http://bit.ly/NUZuQ3), running from October 15th-18th, 2012 at the Radisson Blu Hotel, Edinburgh, UK, to present a paper on the "Cost effective assessment of the infrastructure security posture".

Abstract of paper


Today organisations face the threat of cyber-attack; whether they are an international conglomerate or a one-man outfit, none are immune to the possibility of attack if they have a connection to, or presence on, the Internet. Attacks can take many forms, from Distributed Denial of Service through to targeted phishing emails. Many attacks result in low tangible costs but can have high intangible costs for the targeted organisation, such as loss of brand reputation and loss of business. Many small businesses have taken weeks to discover that their websites have been blacklisted by search engines because the site has been compromised and is hosting malware.

Although attack sophistication has grown from the password-guessing attacks of the early 1980s to the sophisticated Advanced Persistent Threat (APT) seen today, the skill level required to launch attacks has dropped, as the development of hacking and malware toolkits has given the script kiddie sophisticated tools with simple GUI interfaces. The hacking group Anonymous's use of tools such as the Low Orbit Ion Cannon (LOIC), available on SourceForge and GitHub, enabled thousands of individuals with no programming knowledge to take part in its orchestrated campaigns. The high profile of cyber-activity is encouraging an increasing number of people to dabble with easily findable tools and scripts, and many progress deeper into illegal activity.

The motivations of attackers targeting an organisation are extremely wide-ranging: from organised criminal gangs looking for monetary return, to rival organisations or countries looking for intellectual property, to hacktivists looking to exact revenge for a perceived infringement of their freedom, through to random attacks launched simply because they can be, or to develop and test skills against a random target. All this requires an organisation to protect itself from attack, whether it is hosting a website on a third party's infrastructure or has a large number of gateways connected to the Internet, offering multiple services hosted on its own infrastructure to the general public and other organisations.

An organisation’s security posture is an indication of the countermeasures that have been implemented to protect the organisation’s resources. The countermeasures are security best practices appropriate to the organisation’s risk appetite and business requirements. The security posture is defined by an organisation’s security policy, its mission statement and its business objectives.

Countermeasures come with a cost, which should not exceed the value of the resources they are protecting; they should be effective, provide value for money, and deliver a return on investment for the organisation.

Measuring how the organisation’s actual security posture relates to its agreed acceptable level of risk is a problem faced by organisations when considering whether their countermeasures are effective and provide value for money and a return on investment. There are two methodologies that can be used:
  1. Auditing – the mechanism of confirming that processes or procedures agree with a master checklist for compliance (see the sketch after this list)
  2. Assessing – a more active, or intrusive, testing methodology used to assess processes or procedures that cannot be adequately verified using a checklist or security policy
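
As an illustration of the audit approach, here is a minimal Python sketch that compares observed settings against a master checklist. All control names and values are hypothetical, and a real audit would apply per-control comparison rules rather than simple equality.

# Minimal sketch of checklist-based auditing: compare observed
# configuration values against a master checklist.
# All control names and values are hypothetical examples.
checklist = {
    "password_min_length": 8,
    "account_lockout_threshold": 5,
    "tls_minimum_version": "TLSv1.1",
}

observed = {
    "password_min_length": 6,
    "account_lockout_threshold": 5,
    "tls_minimum_version": "TLSv1.0",
}

for control, required in checklist.items():
    actual = observed.get(control)
    status = "PASS" if actual == required else "FAIL"
    print("%-26s required=%-8s actual=%-8s %s" % (control, required, actual, status))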
This paper investigates the attack surface of an organisation's infrastructure and applications, examining cases where the use of cloud and mobile computing has extended the infrastructure beyond the traditional perimeter of the organisation's physical locations, and the challenges this causes in assessing the security posture.

A review of the use of assessment methodologies such as vulnerability assessment and penetration testing to assess the infrastructure and application security posture of an organisation shows how they can identify vulnerabilities, which can aid the risk assessment process in developing a security policy. It will demonstrate how these methodologies can help in assessing the effectiveness of the implemented countermeasures and aid in evaluating whether they provide value for money and a return on investment.

It is proposed that a long-term strategy of using both methodologies to assess the security posture, based on the business requirements, will provide the following benefits:
  • Cost effective monitoring of the infrastructure and security posture
  • Ensuring that the countermeasures retain effectiveness over time
  • Responding to the continual changing threat environment
  • Ensuring that value for money and return on investment are maintained

Wednesday 26 September 2012

PCI, Block ciphers & TLSv1


One of the common problems appearing when scanning secure websites is a reported vulnerability in TLSv1 with cipher-block chaining (CBC); see the sample report below, as generated by scanning tools for this problem.

Summary:
SSL/TLS Protocol Initialization Vector Implementation Information Disclosure Vulnerability
Synopsis:
It may be possible to obtain sensitive information from the remote host with SSL/TLS-enabled services.
Impact:
A vulnerability exists in SSL 3.0 and TLS 1.0 that could allow information disclosure if an attacker intercepts encrypted traffic served from an affected system. TLS 1.1, TLS 1.2, and all cipher suites that do not use CBC mode are not affected.
Resolution: 
Configure SSL/TLS servers to only use TLS 1.1 or TLS 1.2 if supported.
Configure SSL/TLS servers to only support cipher suites that do not use block ciphers in CBC mode.
Apply patches if available.
Note that additional configuration may be required after the installation of the MS12-006 security update in order to enable the split-record countermeasure.
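
To verify such a finding independently of the scanning tool, one can attempt a handshake against the server with each protocol version in turn. Below is a minimal Python sketch of the idea using the standard ssl module; it assumes a Python build that exposes the per-version protocol constants (PROTOCOL_TLSv1_1 and PROTOCOL_TLSv1_2 appeared in Python 3.4), and www.example.com is a placeholder host, not a real target.

# Probe which SSL/TLS protocol versions a server will negotiate by
# attempting a handshake with each version in turn.
# www.example.com is a placeholder, not a real target.
import socket
import ssl

HOST, PORT = "www.example.com", 443

for name, proto in [("TLSv1",   ssl.PROTOCOL_TLSv1),
                    ("TLSv1.1", ssl.PROTOCOL_TLSv1_1),
                    ("TLSv1.2", ssl.PROTOCOL_TLSv1_2)]:
    try:
        ctx = ssl.SSLContext(proto)    # no certificate validation; probe only
        sock = socket.create_connection((HOST, PORT), timeout=5)
        tls = ctx.wrap_socket(sock, server_hostname=HOST)
        print("%s accepted, negotiated cipher: %s" % (name, tls.cipher()[0]))
        tls.close()
    except (ssl.SSLError, OSError):
        print("%s rejected" % name)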

The problem with configuring the server to use only TLS 1.1 or TLS 1.2 is that Windows XP with IE8 supports only TLS 1.0 and SSL 2.0 and 3.0. Whilst Windows 7 with IE8 supports TLS 1.0, 1.1 and 1.2, only TLS 1.0 is enabled by default. This can affect the users of a website; XP is still used by around 42% of all clients, as measured by Net Marketshare.

Operating System      Market Share
Windows 7             42.76%
Windows XP            42.52%
Windows Vista          6.15%
Mac OS X 10.7          2.45%
Mac OS X 10.6          2.38%


A more user-friendly way to mitigate the vulnerability is to disable the CBC cipher suites on the server, such as those listed below (a configuration sketch follows the list):

PSK-AES256-CBC-SHA
EDH-RSA-DES-CBC3-SHA
EDH-DSS-DES-CBC3-SHA
ADH-DES-CBC3-SHA
DES-CBC3-SHA
DES-CBC3-MD5
PSK-3DES-EDE-CBC-SHA
KRB5-DES-CBC3-SHA
KRB5-DES-CBC3-MD5
RC2-CBC-MD5
PSK-AES128-CBC-SHA
IDEA-CBC-SHA
EDH-RSA-DES-CBC-SHA
EDH-DSS-DES-CBC-SHA
ADH-DES-CBC-SHA
DES-CBC-SHA
DES-CBC-MD5
KRB5-DES-CBC-SHA
KRB5-DES-CBC-MD5
EXP-EDH-RSA-DES-CBC-SHA
EXP-EDH-DSS-DES-CBC-SHA
EXP-ADH-DES-CBC-SHA
EXP-DES-CBC-SHA
EXP-RC2-CBC-MD5
EXP-KRB5-RC2-CBC-SHA
EXP-KRB5-DES-CBC-SHA
EXP-KRB5-RC2-CBC-MD5
EXP-KRB5-DES-CBC-MD5
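
As an illustration of this approach, the sketch below shows how a server-side context restricted to non-CBC (stream cipher) suites might look using Python's ssl module. RC4 was the usual non-CBC choice for TLS 1.0 at the time, although it has since been deprecated for weaknesses of its own; the certificate and key paths are placeholders, and the example assumes an OpenSSL build that still ships RC4.

# Sketch of a server-side SSL context offering only non-CBC (stream
# cipher) suites as a BEAST mitigation; RC4 was the era-typical choice.
# Certificate/key paths are placeholders; assumes an OpenSSL build
# that still includes RC4.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1)
ctx.set_ciphers("RC4-SHA:RC4-MD5")                # stream ciphers only, no CBC mode
ctx.load_cert_chain("server.crt", "server.key")   # placeholder paths

# The context would then wrap the listening socket's accepted connections:
# tls_sock = ctx.wrap_socket(client_sock, server_side=True)

The trade-off is compatibility within TLS 1.0: virtually every client of the period supported RC4-SHA, so users on XP with IE8 are not locked out, at the cost of relying on a cipher that was already showing its age.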