Enterprise database and information storage infrastructures, holding the crown jewels of an organisation, are subject to a wide range of abuses and attacks, particularly when left vulnerable by poor system design or configuration.

Martin Pill CITP, Principal Consultant and Security Architect with BSI Cybersecurity and Information Resilience, describes the most critical of these followed by recommendations for mitigating the risk of each.

1. Cloud database configuration errors 

Barely a week goes by without a new data breach caused by insecurely configured cloud databases or storage services. Exposed data includes information on 90% of Panamanian citizens, 20.8 million Ecuadorian citizens, 48 million social media records and 142GB of documents; the list goes on and on.

Public cloud service IP addresses are not secret and are continually scanned for vulnerabilities by both malicious actors and security researchers.
Avoid them finding your crown jewels by:

  1. Knowing what data you hold and where it is located, and implementing effective infrastructure configuration and change management procedures. Many breaches have involved data stores organisations were unaware of, or which had been created insecurely on an ad-hoc, uncontrolled basis.
  2. Being aware that cloud databases and other data stores may default to being open to the internet on creation and rely on the service user to properly lock them down, such as by using the database firewall. Implement procedures to ensure this happens.
  3. Ensuring all databases and data stores are configured with strong authentication by default. Many breaches were easily accomplished due to lack of authentication.
  4. Implementing procedures to monitor your cloud perimeter for insecure data services; a sketch of one such check follows this list. Even if you are fully secure now, you are only an accidental mouse click away from exposure.
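
The sort of perimeter monitoring described in point 4 can be automated. Below is a minimal sketch, assuming AWS S3 and the boto3 SDK; other providers offer equivalent APIs, and the same idea extends to managed database services.

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")

    # Flag any bucket that does not block all forms of public access.
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            if not all(config.values()):
                print(f"WARNING: {name} does not block all public access")
        except ClientError as exc:
            if exc.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                print(f"WARNING: {name} has no public access block configured")
            else:
                raise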

2. SQL injection

SQL injection vulnerabilities occur when application code contains dynamic database queries which directly include user supplied input. This is a devastating form of attack and BSI Penetration Testers regularly find vulnerable applications that allow complete authentication bypass and extraction of the entire database.

Preventing SQL injection is relatively straightforward:

  1. Avoid the use of dynamic queries within applications. Using prepared statements with parametrised queries will stop SQL injection; a sketch follows this list.
  2. Validate user input before the application acts on it. This is a very worthwhile additional defence which also helps thwart many other attacks. As an added bonus, include monitoring and alerting at the data tier for any use of dynamic queries. This will detect an attacker who has, for example, managed to bypass the application and query the database directly.
  3. Bear in mind that NoSQL databases are also subject to injection attacks; controls such as strict input validation are needed to reduce the likelihood of these.
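
To illustrate point 1, here is a minimal sketch using Python’s built-in sqlite3 module; the table and column names are illustrative, and other database drivers follow the same parametrised pattern with slightly different placeholder syntax.

    import sqlite3

    def find_user(conn: sqlite3.Connection, username: str):
        # Unsafe, never do this: f"SELECT ... WHERE username = '{username}'"
        # Safe: the driver binds the value, so input is never interpreted as SQL.
        cur = conn.execute(
            "SELECT id, username FROM users WHERE username = ?",
            (username,),
        )
        return cur.fetchone()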

3. Weak authentication

Weak authentication has many facets, ranging from brute forcing of the user interface, to insecure storage of the database credentials used by an application.

  1. Implement brute force controls such as account lockout after a set number of invalid attempts. Use password blacklisting to prevent users choosing common passwords.
  2. Don’t require users to regularly change their passwords as this encourages easy to remember (and hence guess) passwords.
  3. Consider implementing multi-factor authentication so that an attacker needs more than knowledge of a username and password to illegally access data.
  4. Don’t store user passwords in the clear (yes, Pen Tests still find this). Use a strong password hashing algorithm such as bcrypt and salt each password with a long, random, unique string; a sketch follows this list.
  5. Strongly protect the application database credentials and make sure they are unguessable. Storing credentials in the clear in a configuration file is not secure (but often done). Use a key vault or other secure means of storage.
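
As an illustration of point 4, here is a minimal sketch using the third-party Python bcrypt package; bcrypt generates and embeds a unique random salt for each password, so no separate salt handling is required.

    import bcrypt

    def hash_password(password: str) -> bytes:
        # gensalt() produces a fresh random salt which is stored within the hash.
        return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt())

    def verify_password(password: str, stored_hash: bytes) -> bool:
        return bcrypt.checkpw(password.encode("utf-8"), stored_hash)

    stored = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", stored)
    assert not verify_password("wrong password", stored)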

4. Privilege abuse

Users may abuse legitimate data access privileges for unauthorised purposes. For example, a user in sales with privileges to view individual customer records may abuse that privilege to retrieve all customer records to pass to a competitor.

Good hiring policies will reduce the likelihood of this occurring, but they should be backed up by technical measures and by effective logging and monitoring to detect abuse.

  1. User access to data should be rate limited. In the example above, a malicious user could achieve a bulk export by accessing many individual records. Capping the number of records accessible per day at a reasonable maximum, restricting the locations from which data can be accessed and, where feasible, applying time-of-day restrictions will all mitigate this; a sketch of a simple daily limit follows this list.
  2. Ideally, the application and its databases would not expose interfaces which allow arbitrary queries and bulk export of data.
  3. If there is a business need for, say, data analysts to be able to perform arbitrary queries on data, access to and use of this interface should be logged, regularly audited and limited to as few people as possible.
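
A minimal, illustrative sketch of the daily limit mentioned in point 1 is shown below; a real implementation would persist the counters and raise an alert when a request is refused, rather than silently denying it.

    from collections import defaultdict
    from datetime import date

    DAILY_LIMIT = 200  # illustrative maximum records per user per day

    _access_counts: dict[tuple[str, date], int] = defaultdict(int)

    def check_and_record_access(user_id: str, records_requested: int = 1) -> bool:
        """Return True if the request is within today's limit for this user."""
        key = (user_id, date.today())
        if _access_counts[key] + records_requested > DAILY_LIMIT:
            return False  # deny and, ideally, log and alert on the attempt
        _access_counts[key] += records_requested
        return True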

5. Excessive privileges

If users hold privileges that exceed the requirements of their job function, these privileges may be abused by the individual, or by an attacker who compromises their account. When people move roles, they may be given the new privileges they need without those they no longer require being removed.

This problem can be effectively addressed by a combination of technical and procedural means:

  1. Role-based access controls within the application which accurately map required access permissions to job function; a sketch follows this list.
  2. Procedures which ensure that when staff change roles, their permissions are updated to reflect this, with those no longer required being removed.
  3. Regular, but not necessarily frequent, reviews of who holds which roles to confirm the procedures are working, and that the contractor who left six months ago doesn’t still have an active account!
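
A minimal, illustrative sketch of point 1 follows: roles map to just the permissions a job function needs, and a role change replaces the old set rather than adding to it. All role and permission names here are hypothetical.

    # Each role carries only the permissions that job function requires.
    ROLE_PERMISSIONS = {
        "sales": {"customer:read"},
        "sales_manager": {"customer:read", "customer:export"},
        "dba": {"database:admin"},
    }

    def is_allowed(user_roles: set[str], permission: str) -> bool:
        return any(permission in ROLE_PERMISSIONS.get(role, set()) for role in user_roles)

    # On a role change, replace the user's role set; permissions they
    # no longer need then disappear automatically.
    assert is_allowed({"sales"}, "customer:read")
    assert not is_allowed({"sales"}, "customer:export")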

6. Inadequate logging and weak auditing

Logging and auditing are key to deterring and detecting misuse and enabling adequate investigation of suspected data compromise. In this context, logging is the collection of data, and auditing is someone actually looking at it.

When considering your logging and auditing requirements:

  1. Think about what information you need to collect at the application and database query layer. You will have thought about use cases for the system, but also think of misuse cases and the data needed to detect them. If you can build in automatic alerting rules, so much the better.
  2. Consider how your logging data will be secured. If it’s all in the application database and that is compromised, an attacker could wipe or falsify the log data; a sketch of logging to a separate store follows this list. Logs can contain sensitive information, so aim to minimise this and secure your log data stores.
  3. Implement procedures for auditing the data collected so you know when something is amiss. Make sure logged information can be displayed in a meaningful way.
  4. Consider whether you could justify implementing network-based audit appliances which monitor all database requests at a granular level and are independent of all users.
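
As a minimal sketch of the separate, structured log store suggested in point 2, the example below uses Python’s standard logging module; the field names are illustrative, and in production the handler would point at a remote, append-only log service rather than a local file.

    import json
    import logging

    audit_log = logging.getLogger("db.audit")
    audit_log.setLevel(logging.INFO)
    # In production, send this to a remote, append-only store, not a local file.
    audit_log.addHandler(logging.FileHandler("db-audit.log"))

    def log_query(user_id: str, action: str, table: str, row_count: int) -> None:
        audit_log.info(json.dumps({
            "user": user_id,
            "action": action,
            "table": table,
            "rows": row_count,
        }))

    log_query("u-1042", "SELECT", "customers", 1)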

7. Denial of service

Network-level Denial of Service (DoS) attacks from the internet can overwhelm your system regardless of the capacity of its internet connection. Cloud-based DoS protection services are the usual defence against this and many offer a free protection tier.

Resource consumption-based attacks, such as repeatedly sending complex search queries to exhaust server resources, require a different approach, such as request rate limiting.
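
A minimal, illustrative sliding-window rate limiter of this kind is sketched below; in practice such limiting usually lives in the web framework, API gateway or reverse proxy rather than in application code.

    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 60
    MAX_REQUESTS = 10  # illustrative limit per client per window

    _history: dict[str, deque] = defaultdict(deque)

    def allow_request(client_id: str) -> bool:
        """Allow at most MAX_REQUESTS expensive requests per client per window."""
        now = time.monotonic()
        window = _history[client_id]
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()
        if len(window) >= MAX_REQUESTS:
            return False  # reject, queue or degrade the request
        window.append(now)
        return True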

If your application is a highly scalable cloud service, it may expand under attack to maintain availability, but the downside is a big bill at the end of the month for all the additional resources.

8. Exploiting unpatched services

While up-to-date patching alone won’t make you secure, operating vulnerable unpatched services will significantly increase the likelihood of being compromised.

  1. Make sure you maintain a complete and up-to-date inventory of the software components in your systems, including third-party and open source libraries in use; a sketch of one way to do this follows the list.
  2. Establish a vulnerability management process which enables you to ascertain, on a regular basis, what vulnerabilities are present within your system(s) and prioritise remediation. This should include subscribing to relevant vulnerability notification services and ideally using an automated vulnerability assessment system (VAS). A VAS can be run by in-house staff and will flag up configuration as well as patching issues, so it is a very worthwhile investment.
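
As a minimal sketch of the software inventory in point 1, the snippet below lists installed Python packages using only the standard library; equivalent listings exist for other ecosystems, and the output can be fed into your vulnerability management process or a scanner such as pip-audit.

    from importlib.metadata import distributions

    # Print every installed distribution and its version, sorted by name.
    for dist in sorted(distributions(), key=lambda d: d.metadata["Name"].lower()):
        print(f"{dist.metadata['Name']}=={dist.version}")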

9. Insecure system architecture

While controls against specific database threats are important, they must form part of a design which is secure overall. This is a big topic, but some pointers are given below:

  1. While boundary protection is important, so too is defence in-depth, to limit the impact of initial compromise. Think of scenarios where an attacker has gained an initial foothold, e.g. by dropping a web command shell on your boundary web server. What could they do next? You can test scenarios out for real by adding them to your next Penetration Test.
  2. If your database contains mostly data used internally but has a subset of data available externally, consider pushing the external data to an entirely separate database with its own external application. That prevents compromise of the public interface impacting internal data.
  3. Review the security of the management interfaces to and within your system. Internet-facing remote access services must be properly designed and robust. Internal management networks must not enable bypass of tiered security controls.

10. Inadequate backup

Theft of database backup tapes and hard disks has long been a concern, but new threats to the availability of data have arisen and these must not be ignored.

  1. All backups should be encrypted to protect the confidentiality and integrity of the data, and this must include proper key management: keys must not fall into the wrong hands, but must be available when needed to restore data. A sketch follows this list.
  2. Ransomware is now targeting servers and their databases as well as user machines. If your backups are all online and reachable over a file share, for example, ransomware will encrypt them.
  3. Resilience within cloud services, e.g. geo-replication, is not the same as backup. It’s possible for an attacker to delete so much cloud infrastructure and customer data that an organisation can’t survive, as some have already found.
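
A minimal sketch of the backup encryption in point 1, using the third-party Python cryptography package, is shown below; Fernet provides authenticated encryption, so tampering is detected on restore. The file names are illustrative, and the key must be held separately from the backups, for example in a key vault.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # store in a key vault, never alongside the backup
    fernet = Fernet(key)

    with open("backup.sql", "rb") as f:
        ciphertext = fernet.encrypt(f.read())

    with open("backup.sql.enc", "wb") as f:
        f.write(ciphertext)

    # On restore, fernet.decrypt(ciphertext) raises InvalidToken if the data
    # has been tampered with or the wrong key is supplied.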

Make sure that your backups are not subject to the same threats as the live data and that full compromise of the live data environment cannot also compromise your backups. And do test your restore procedures, regularly.

Conclusion

If you follow the guidance above, you will have mitigated many threats to your data, but ideally you would perform a risk assessment and base your controls on the findings.

The UK National Cyber Security Centre publishes very accessible security guidance for businesses of all sizes, and perusal of its website is highly recommended.