All computers talk; whether it be your laptop, tablet, mobile phone or smart TV, you may never have heard them speak but they have a voice, says Giacomo F. Mosca FBCS, Senior Director, Technical Operations, Transparent Technology.

They talk when you turn them on, when they appear to be idle, when they are in use and even when they’re seemingly switched off. Run an instance of Wireshark (previously known as Ethereal) and you’ll see what I mean. It is this chatter that attackers listen for: particular conversations between computers that they could exploit.
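
If you’d like to eavesdrop yourself, a few lines of Python can mimic in miniature what Wireshark does. This is a minimal sketch, assuming the third-party scapy library is installed (pip install scapy) and that you have the privileges needed to capture packets on your machine:

    # Minimal packet-capture sketch; assumes scapy is installed and you have
    # root/administrator rights on the capturing machine.
    from scapy.all import IP, sniff

    def show_conversation(pkt):
        # Print who is talking to whom for each IP packet seen on the wire.
        if pkt.haslayer(IP):
            print(f"{pkt[IP].src} -> {pkt[IP].dst} (IP protocol {pkt[IP].proto})")

    sniff(prn=show_conversation, count=10)  # capture ten packets, then stop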

These conversations reveal in depth what the computers are doing and, in turn, the actions of the people operating them; being able to interpret these communications between computers provides a powerful insight.

As you can imagine, in any large interconnected network, such as a corporate network or the internet, a vast number of these conversations are occurring at any moment.

From a system administrator’s point of view, we need a way to filter out the mundane discussions and leave the interesting ones that could reveal any unusual activity. This is where security information and event management (SIEM) provides the insight.

A managed and monitored SIEM solution is key for any organisation; it sucks up the data (conversations) being spat out by the interconnected equipment located around the company. It then correlates that data in a central location, locating patterns and highlighting to the users managing the tool where potential issues or unusual events may be occurring on the network, so that each item can be investigated and either dismissed as a false alarm or escalated for deeper investigation.

The keys to getting the most valuable insight from SIEM are:

  1. placing sensors around key points of the network;
  2. fine-tuning the alerts being correlated by the algorithms, to ensure the right type and severity of alerts are highlighted (see the sketch after this list);
  3. possessing the proper resources to perform timely investigations, which can subsequently sound the alarm and take remedial action, if necessary.
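
As a toy illustration of the correlation in point two, the Python fragment below scans authentication logs for repeated failed logins from a single address. The log format, sample lines and threshold are all assumptions for the example; a real SIEM reads from the collectors across the estate and its rules are tuned over time:

    # Hypothetical correlation rule: alert when one source address racks up
    # repeated failed logins. Format and threshold are invented for the sketch.
    import re
    from collections import Counter

    LOG_LINES = [
        "2014-06-01T02:14:07 sshd: Failed password for admin from 203.0.113.9",
        "2014-06-01T02:14:09 sshd: Failed password for root from 203.0.113.9",
        "2014-06-01T02:14:12 sshd: Failed password for admin from 203.0.113.9",
        "2014-06-01T09:01:33 sshd: Accepted password for alice from 198.51.100.4",
    ]

    FAILED = re.compile(r"Failed password for \S+ from (\S+)")
    THRESHOLD = 3  # assumption: three failures from one address warrants an alert

    failures = Counter()
    for line in LOG_LINES:
        match = FAILED.search(line)
        if match:
            failures[match.group(1)] += 1

    for source, count in failures.items():
        if count >= THRESHOLD:
            print(f"ALERT: {count} failed logins from {source} - investigate")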

Technologies such as SIEM, while not the only form of detection, are a powerful tool, playing a key role on the frontline of early detection. They can alert organisations to activity that is out of the ordinary for their estates, putting them in a position to take remedial action to protect their assets.

Another key tool available is an identity and access management (IAM) system. If you work in an office today, its simplest form is your access badge, the policy set around it and the CCTV cameras.

Together they define the areas you can access and at what times, while security staff watch the cameras for signs of suspicious activity. I like the saying over at Dell for its One Identity solution: essentially, identity and access management keeps the good guys good and keeps the bad guys out.

In the IT realm, these systems ensure users aren’t accessing restricted areas or data, and that they are coming in through safe connections, on low-risk devices (corporate-managed computers rather than BYOD), at the expected times of day and from anticipated parts of the world. As with any system that provides great value, they require tuning and aligning to an organisation’s operating model and policies.

These systems utilise a combination of risk profiles and company policy to control access. Take the scenario where you’re normally based out of London, using a corporately managed computer during normal working hours. Even with the proper login credentials, a login originating from a non-corporate-managed computer, at say 2am, from an unexpected location such as China, would be prohibited by these identity systems.
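
A decision engine of this kind might be sketched as below. The attributes, weightings and cut-offs are invented purely for illustration; they are not any particular product’s policy:

    # Hypothetical risk-based access decision; weights and thresholds are
    # assumptions for the sketch, not a real IAM product's configuration.
    def access_decision(managed_device: bool, country: str, hour: int) -> str:
        risk = 0
        if not managed_device:
            risk += 40                # BYOD is riskier than a managed build
        if country != "GB":
            risk += 40                # outside the user's usual region
        if hour < 7 or hour > 19:
            risk += 20                # outside normal working hours
        if risk >= 60:
            return "deny"             # too far from the user's profile
        if risk >= 40:
            return "step-up"          # allow, but demand a second factor
        return "allow"

    # The London example from the text: valid password, but an unmanaged
    # machine, in China, at 2am - the request is refused outright.
    print(access_decision(managed_device=False, country="CN", hour=2))  # deny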

These new identity systems even record the sessions of privileged users, much like CCTV cameras for systems, ensuring there is no foul play from those who hold the keys to the kingdom: the privileged account holders, on whom few access restrictions are placed.

Towards the end of May 2014, eBay announced that in late February or early March 2014 approximately 145 million accounts had been compromised. This reportedly occurred through the credentials of genuine eBay employees being taken over by malicious users, who used those credentials to access a database containing users’ encrypted passwords, email addresses, names, physical addresses, phone numbers and dates of birth.

Companies that have made their way into our daily vocabulary, such as eBay, one of the top 100 best-known brands globally, place a significant value, and in turn proportional investment, on supporting trust and security: the foundations of their business.

The basis of all information security is understanding the risks to the business: their likelihood, the cost of fixing them and their potential impact.

To stay competitive, businesses cannot ensure protection against all eventualities, but they can prepare to mitigate the possible high-impact risks and accept the less likely, low-impact ones on their evolving risk registers.
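
To make that arithmetic concrete, a risk register entry is often scored as likelihood multiplied by impact and compared against the organisation’s risk appetite. The scales, sample entries and threshold below are assumptions for the example:

    # Hypothetical risk register: score = likelihood x impact on 1-5 scales;
    # mitigate whatever exceeds the appetite, accept the rest. All invented.
    risks = [
        {"name": "Stolen staff credentials", "likelihood": 4, "impact": 5},
        {"name": "Defaced marketing microsite", "likelihood": 3, "impact": 2},
        {"name": "Lost unencrypted USB stick", "likelihood": 2, "impact": 2},
    ]

    APPETITE = 9  # assumption: scores above this threshold must be mitigated

    for risk in sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
        score = risk["likelihood"] * risk["impact"]
        action = "mitigate" if score > APPETITE else "accept"
        print(f"{risk['name']}: score {score} -> {action}")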

Sometimes the likelihood or the impact may not warrant the cost of correcting the risk. And when activity occurs with stolen but valid credentials, as in eBay’s case, the typical tell-tale signs of malicious activity are harder to identify.

Two-factor authentication (2FA) would by its nature be the first line of defence, even against attackers holding valid credentials: a risk-reduction step for safeguarding a user’s identity. The model comprises something you know (typically a password) and something you have (typically a soft or hard token generating a unique value verifiable by the destination server).
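
The ‘something you have’ is commonly a token implementing a scheme such as time-based one-time passwords (TOTP, RFC 6238). A bare-bones generator, for illustration only, looks like this in Python; a production system should rely on a vetted library and constant-time comparisons:

    # Bare-bones TOTP (RFC 6238 / RFC 4226) for illustration only.
    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // interval       # time step shared with server
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                   # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Server and token share the secret; both compute the same six digits,
    # so a stolen password alone is no longer enough to log in.
    print(totp("JBSWY3DPEHPK3PXP"))  # example shared secret, base32-encoded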

A well-tuned SIEM system could also reveal valuable insights from the data fed in by the supporting infrastructure of firewalls and intrusion prevention / detection systems (IPS / IDS), in conjunction with an IAM solution as mentioned earlier.

Key businesses, governments and well-known brands such as eBay, which possess information of value lying deep under layers of protection, will always be the subject of attacks. In recent years, attacks have become more complex, moving away from one-dimensional cross-site scripting (XSS) and malware and mutating into the advanced persistent threat (APT).

According to an analysis undertaken by FireEye in 2013, approximately 13 unique APT attacks were occurring daily. Companies such as FireEye are growing rapidly with the introduction of bait systems acting as honeypots targeting hackers: their solutions create fake assets within the company’s infrastructure, intermingled with real servers and services.

When a malicious person gains access to these fake assets, the software reports back to a central control, advising of the attack. This places the company in a powerful position to respond, without wasting resources investigating false positives.
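
In spirit, a decoy can be as simple as a listener on a port that nothing legitimate should ever touch, so any connection at all is worth investigating. The minimal sketch below makes the port an assumption and prints locally where a real deployment would report to a central console:

    # Minimal decoy listener: every connection is a high-confidence signal,
    # because no legitimate traffic should ever reach this port.
    import socket

    DECOY_PORT = 2222  # assumption: an unused port dressed up as SSH

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", DECOY_PORT))
    server.listen(5)

    while True:
        conn, (addr, port) = server.accept()
        print(f"ALERT: unexpected connection from {addr}:{port} to decoy")
        conn.close()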

The Information Security Forum’s (ISF) Threat Horizon 2016 is an interesting resource, identifying the main threats to consider for the remainder of this year and looking forward to 2016; Verizon’s Data Breach Investigations Report (DBIR) 2014 complements it by looking back at global breach metrics in aggregate.

In April, a Google security engineer discovered a vulnerability in OpenSSL, a widely used implementation of the protocols that secure data in transit between users and servers, most commonly recognised as the little padlock icon in a browser’s address bar.

The vulnerability was subsequently titled Heartbleed because it was detected in OpenSSL’s Heartbeat extension. This extension keeps a secure communication link alive between user and server, confirming the connection is still valid without the need for a longer renegotiation each time.

The Heartbleed vulnerability allowed anyone to read protected secrets from vulnerable OpenSSL servers in 64KB chunks (approximately 10,500 words of text) at a time: encryption keys allowing decryption of any past or future transactions, usernames and passwords, and other details typically protected by encryption, such as financial information.
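
Conceptually, the bug came down to trusting a length field supplied by the client. The toy Python simulation below is not OpenSSL’s actual code (which is written in C); it simply shows how echoing back the claimed number of bytes leaks whatever sits next to the real payload:

    # Toy simulation of the Heartbleed flaw; the 'memory' below just stands
    # in for whatever happened to sit next to the heartbeat request.
    MEMORY = bytearray(
        b"bird"                                 # the 4-byte payload actually sent
        b" | key=SECRET123 | user=alice:pa55"   # adjacent 'memory' beside it
    )

    def heartbeat_vulnerable(claimed_len: int) -> bytes:
        # BUG: echoes claimed_len bytes with no check against the real payload,
        # so secrets beyond the payload leak out.
        return bytes(MEMORY[:claimed_len])

    def heartbeat_fixed(claimed_len: int, actual_len: int = 4) -> bytes:
        # The fix: refuse requests whose claimed length exceeds the payload.
        if claimed_len > actual_len:
            raise ValueError("claimed length exceeds actual payload")
        return bytes(MEMORY[:claimed_len])

    print(heartbeat_vulnerable(38))  # echoes 'bird' plus the adjacent secrets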

OpenSSL is a widely used means of securing communication links, and while many sites found no evidence of accounts being compromised, recognisable names potentially affected included Facebook, Dropbox, Instagram, Pinterest, Google, Yahoo, Flickr and YouTube.

Even now, months after the vulnerability was discovered, reportedly over 300,000 public servers running OpenSSL remain vulnerable. In the software engineering and development arena, issues such as the Heartbleed vulnerability are known as software defects.

A software defect typically refers to software producing results that are not as defined within the requirements, or not as intended (no defined requirement is being broken, but the software still operates in an unexpected manner).

Any change to a software product goes through a software development life cycle (SDLC); there are multiple methodologies, and their implementations vary between companies and development shops, but the principles remain the same. A change to the software originates as a request for enhancement (RFE) or change request (CR), identifying what needs to change and why.

Once approved, this drives low-level, detailed requirements to be written, usually by someone holding a business analyst (BA) title. These requirements detail what the software is expected to do for a particular input, usually quantified, so it is easy to measure once it has been built and enters formal testing.

The importance of the text the business analyst writes at the requirements definition phase, and the rigidity of the requirements derived, are sometimes forgotten. The time invested up front in these requirements sets the pace and the outcome for the whole process of development, testing, defect fixing and release.

A loose, widely interpretable requirement will be turned into code, fly through testing and release, and then generate issues in the wild, all originating from that requirement defined some time earlier.

After the requirements are agreed, the project moves into the development phase, where the code to support the new requirements, or change, is written. It is written by a developer / software engineer, or by a group where the work has been segmented into different subsystems or logical areas so people can work without stepping on each other’s toes.

At this phase, prior to moving into formal testing by a person holding a quality assurance (QA) or integration and test (I&T) role, the code will usually be peer-reviewed by other developers and re-checked by its author.

These reviews are undertaken to understand and locate potential issues within the code, which can be fixed or flagged as particular risk areas for testing before moving to the next phase. This review step is a best practice that aims to prevent any significant number of defects, or any sizeable ones, being found in the testing phase.

The formal testing phase begins with the person holding the quality assurance (QA) or integration and test (I&T) role creating a test plan for how to approach the testing. First, the testing verifies that the new requirements have been met, working from the original requirements defined by the business analyst (BA).

Secondly, it ensures that any previous requirements and functionality are still maintained after the introduction of the new code, also known as regression testing. Thirdly, it verifies that any guidelines outlined are maintained, with stress testing to ensure the software can take the strain of expected volumes of users or data while maintaining its performance.
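
To make this concrete, a measurable requirement translates almost directly into an automated check that then guards the behaviour release after release. The requirement, function and tests below are invented for illustration:

    # Hypothetical example: the BA's requirement "passwords must be at least
    # eight characters and contain a digit" becomes tests that guard the rule
    # in every release (regression tests once the feature has shipped).
    def password_is_valid(password: str) -> bool:
        return len(password) >= 8 and any(ch.isdigit() for ch in password)

    def test_requirement_met():
        assert password_is_valid("s3cur3pass")        # meets both conditions

    def test_requirement_rejects_short():
        assert not password_is_valid("s3cr3t")        # too short

    def test_requirement_rejects_no_digit():
        assert not password_is_valid("longpassword")  # no digit

    if __name__ == "__main__":
        test_requirement_met()
        test_requirement_rejects_short()
        test_requirement_rejects_no_digit()
        print("all requirement checks pass")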

Additionally, if the software is being released into the public domain, accessed over the internet, used by a mass of people or sensitive in nature, it may also undergo penetration testing by specialist security consultants, to ensure it cannot be manipulated into exposing sensitive data.

Ideally, software would be released without any defects at all, but the complexity of code makes this almost impossible; to mitigate this, organisations place rules on their releases. Rules such as: the code is not to be made widely available with any open priority defects, those which have a major impact on the users consuming the software.

Any smaller defects are prioritised for future service packs or patches, released at regular intervals and worked on in the background by the software development teams. The software development management responsible for the developers and testing divisions are typically under pressure, as the business usually gives tight deadlines and low budgets, to produce the deliverables so it may begin realising the return on investment (ROI) for the project.
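
A release rule of this kind can even be automated as a gate in the release process. The severity scale and sample defects below are assumptions for illustration:

    # Hypothetical release gate: block the release while any priority-one
    # defect remains open; lower severities are deferred to a service pack.
    open_defects = [
        {"id": "DEF-101", "severity": 1, "summary": "Crash on login"},
        {"id": "DEF-102", "severity": 3, "summary": "Tooltip typo"},
    ]

    blockers = [d for d in open_defects if d["severity"] == 1]
    if blockers:
        ids = ", ".join(d["id"] for d in blockers)
        print(f"Release BLOCKED by priority-1 defects: {ids}")
    else:
        deferred = [d["id"] for d in open_defects]
        print(f"Release approved; deferred to next service pack: {deferred}")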

It is important to:

  1. have business analysts (BAs) define rigid requirements upfront, before any code is written;
  2. give developers sufficient time within projects to verify their peers’ and their own work;
  3. give quality assurance sufficient time to test, and recognise the need for penetration testing.

The importance of these steps is often only highlighted months later, when an issue arises in the field. Additional time invested in these areas could prevent incidents damaging business operations, revenue and image. It takes a mature, forward-thinking organisation and management team to look at problems holistically, examining the entire machine instead of focusing on improving a single cog.

Organisations will never be free of vulnerabilities, nor will they possess the ability to combat every kind of risk. What they can do is keep an appropriate and evolving information security risk register, prioritising the risks relative to their business, and stay on top of that register as it evolves.

By implementing the appropriate tools, controls, processes and people to combat the key risks and greatest impacts, they will be at the forefront of enabling their business to operate securely.