Dr Nathan Dimmock MBCS is an Executive Director in Technology at Morgan Stanley, a leading global financial services firm providing a wide range of investment banking, securities, wealth management and investment management services. He holds a PhD and an MA in Computer Science, both from the University of Cambridge.
A greenfield development project is often held up as the holy grail for technologists. It offers the maximum opportunity for creativity, free from requirements to maintain backwards compatibility and the consequences of legacy decisions. However, a greenfield infrastructure build encounters the traditional challenges of bootstrapping a new computing environment: initially, only a set of self-contained primitives is available, from which IT infrastructure engineers assemble the tools and services that enable more advanced features to be built.
This article provides an overview of the high-level design considerations for the foundations of a secure and scalable IT server infrastructure. The concepts presented here apply to a wide range of IT infrastructures and projects, whether they expect to remain small or aspire to grow over time.
Foundational systems
Identity provider
An identity provider (for example, Microsoft Active Directory or an LDAP server) which all systems can use for authentication and authorisation helps you avoid two common problems: a single 'admin' password that is known to many people, or at the other extreme, users having one password per system and thus too many passwords to manage securely. It also provides a central location for onboarding and offboarding of users as the environment scales, eliminating toil and making it easy to implement good security practices such as promptly revoking access for leavers.
Mandating the use of individual accounts for management operations instead of a shared user also ensures that changes are attributable to a single human, making it easy to understand who created, modified and deleted resources.
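To make this concrete, the short Python sketch below shows how a service might authenticate a user and check group membership against an LDAP identity provider using the ldap3 library. The hostname, base DN and group name are illustrative assumptions rather than a recommendation for any particular directory layout.

```python
# Minimal sketch: authenticating a user and checking group membership
# against an LDAP identity provider. Hostnames, base DNs and group
# names are illustrative placeholders.
from ldap3 import Server, Connection, ALL

LDAP_HOST = "ldaps://ldap.example.internal"   # assumed directory endpoint
BASE_DN = "ou=people,dc=example,dc=internal"  # assumed user subtree
ADMIN_GROUP = "cn=infra-admins,ou=groups,dc=example,dc=internal"

def authenticate(uid: str, password: str) -> bool:
    """Bind as the user; a successful bind proves the credentials."""
    server = Server(LDAP_HOST, get_info=ALL)
    user_dn = f"uid={uid},{BASE_DN}"
    try:
        with Connection(server, user=user_dn, password=password, auto_bind=True) as conn:
            # Authorisation check: is the user a member of the admin group?
            conn.search(ADMIN_GROUP, f"(member={user_dn})", attributes=[])
            return len(conn.entries) > 0
    except Exception:
        return False
```

Because every system delegates to the same directory, revoking access for a leaver is a single change in one place rather than a hunt through each server's local accounts.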
A source of truth for configuration (and inventory)
As the team grows, it will be important to communicate the intended state or configuration of systems to ensure things are built and configured correctly. When resolving issues and outages, it can also be very helpful to understand whether a system has deviated or drifted from the proper settings, which of course can only be accomplished if the correct configuration has been defined and documented.
This does not need to be a technical solution; a shared document with change history will suffice. An equally simple, and potentially even better, approach is to use text files stored in the git source code management tool. Git provides powerful change tracking and collaborative working, and by hosting your git repositories in GitHub or an on-premises equivalent you can add workflows such as four-eyes reviews and immutability (remembering that there are usually backdoors for the person who has admin rights). Starting out with structured text file formats such as YAML also makes it easier to transition to automated configuration management, because these formats can be easily parsed and consumed by automation.
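As a minimal sketch of this approach, the Python snippet below loads a hypothetical YAML inventory with PyYAML and reports where observed host facts have drifted from the recorded source of truth; the inventory layout and field names are invented for illustration.

```python
# Minimal sketch: reading a YAML inventory that acts as the source of
# truth and flagging hosts whose recorded state differs from what an
# (assumed) probe of the live systems reports. File layout is illustrative.
import yaml  # PyYAML

EXAMPLE_INVENTORY = """
hosts:
  web01:
    ip: 192.0.2.10
    os: debian-12
    role: webserver
  db01:
    ip: 192.0.2.20
    os: debian-12
    role: database
"""

def load_inventory(text: str) -> dict:
    """Parse the YAML source of truth into a dictionary of hosts."""
    return yaml.safe_load(text)["hosts"]

def report_drift(intended: dict, observed: dict) -> list[str]:
    """Compare the intended configuration with observed facts per host."""
    drifted = []
    for host, config in intended.items():
        live = observed.get(host, {})
        for key, value in config.items():
            if live.get(key) != value:
                drifted.append(f"{host}: {key} expected {value!r}, found {live.get(key)!r}")
    return drifted

if __name__ == "__main__":
    intended = load_inventory(EXAMPLE_INVENTORY)
    observed = {"web01": {"ip": "192.0.2.10", "os": "debian-12", "role": "webserver"},
                "db01": {"ip": "192.0.2.21", "os": "debian-12", "role": "database"}}
    print("\n".join(report_drift(intended, observed)) or "no drift")
```

The same files can later be fed directly into configuration management tooling, so nothing written at this stage is wasted effort.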
Core services
Once the foundations have been laid, it is time to start building the first infrastructure services.
IP address management (IPAM) and domain name services (DNS)
As devices (which may be virtual) are added to the new network, there will be a need to track which devices are using which IP addresses to ensure uniqueness. Users will want to access these devices by name rather than by addresses that are hard to type and remember (especially with IPv6), so a local DNS service will also be required. Using names rather than addresses in configurations also reduces the work required to relocate devices on your network later, an inevitable event as the infrastructure evolves and matures.
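A minimal sketch of the idea, using only Python's standard ipaddress module, is shown below: allocate the next free address in a subnet from a simple in-memory IPAM table and emit the matching forward DNS record. The subnet, zone and hostnames are placeholders, and a real deployment would use a purpose-built IPAM and DNS product rather than an in-memory dictionary.

```python
# Minimal sketch: allocating the next free address in a subnet from a
# simple in-memory IPAM table and emitting the matching forward DNS
# record. Subnet, zone and hostnames are illustrative placeholders.
import ipaddress

SUBNET = ipaddress.ip_network("192.0.2.0/24")  # assumed management subnet
ZONE = "example.internal"

allocations: dict[str, ipaddress.IPv4Address] = {
    "gw01": ipaddress.ip_address("192.0.2.1"),
    "web01": ipaddress.ip_address("192.0.2.10"),
}

def allocate(hostname: str) -> ipaddress.IPv4Address:
    """Assign the first unused host address in the subnet, guaranteeing uniqueness."""
    used = set(allocations.values())
    for candidate in SUBNET.hosts():
        if candidate not in used:
            allocations[hostname] = candidate
            return candidate
    raise RuntimeError(f"subnet {SUBNET} is exhausted")

def a_record(hostname: str) -> str:
    """Render a BIND-style A record for the allocated address."""
    return f"{hostname}.{ZONE}. IN A {allocations[hostname]}"

if __name__ == "__main__":
    allocate("db01")
    print(a_record("db01"))  # e.g. db01.example.internal. IN A 192.0.2.2
```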
Encryption key management and certificates
Your customers and other stakeholders will expect you to follow reasonable security best practices, and there is likely even a legal requirement to do so if processing personal data (EU GDPR Article 32). This means you will need to ensure at an absolute minimum that any passwords (or other secrets) are only transmitted using secured protocols, for example HTTPS instead of HTTP. With modern web browsers also strongly pushing users to only access HTTPS sites, a public key infrastructure (PKI) that permits automated issuing and secure distribution of certificates will enable the efficient, reliable and secure management of keys and certificates for as many services as required.
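By way of illustration, the sketch below uses the Python cryptography library to generate a private key and a certificate signing request (CSR) that could be submitted to an internal CA or an ACME-style issuance service; the hostname and file names are placeholders, and the surrounding automation is left out.

```python
# Minimal sketch: generating a private key and a certificate signing
# request (CSR) with the 'cryptography' library, ready to submit to an
# internal CA or ACME-style issuance service. Names are placeholders.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Generate a fresh RSA key pair for the service.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Build a CSR for the service's DNS name.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "web01.example.internal")]))
    .add_extension(
        x509.SubjectAlternativeName([x509.DNSName("web01.example.internal")]),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)

# Persist the key (restrict file permissions in practice) and the CSR in PEM form.
with open("web01.key", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))
with open("web01.csr", "wb") as f:
    f.write(csr.public_bytes(serialization.Encoding.PEM))
```

Wrapping steps like this in automation is what makes it practical to issue, renew and rotate certificates for every service rather than only the most visible ones.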
If managing your own hardware, the use of encrypted drives can simplify secure disposal and provide an additional layer of protection against a data leak due to equipment theft or mislaid drives. Encrypted drives require keys to be entered before the operating system can boot.
A key management server accessed using KMIP can provide remote key storage, which ensures the data cannot be accessed by third parties if the drive is removed from your premises, while avoiding the need for a human to manually enter the key each time the server boots. While all servers storing secrets, confidential business data and personal information should be encrypted, consider whether it is sensible to encrypt servers providing core services which do not store these things, otherwise there is the potential to introduce circular dependencies. In particular, some authentication protocols based on X.509 certificates may, depending on configuration, assume the availability of a DNS resolver, so your DNS service must be able to start up independently of other services in this scenario.
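The following is a rough sketch, assuming a PyKMIP client and a reachable KMIP server, of how a drive-encryption key might be created on and later retrieved from a key management server; the server address, certificate paths and workflow are illustrative assumptions only.

```python
# Rough sketch: creating and retrieving a drive-encryption key from a
# KMIP key management server using PyKMIP's ProxyKmipClient. The server
# address, TLS credentials and key handling are illustrative assumptions.
from kmip.pie.client import ProxyKmipClient
from kmip.core import enums

client = ProxyKmipClient(
    hostname="kms.example.internal",   # assumed KMIP server
    port=5696,
    cert="/etc/pki/kmip/client.crt",
    key="/etc/pki/kmip/client.key",
    ca="/etc/pki/kmip/ca.crt",
)

with client:
    # Create a 256-bit AES key on the KMS; only its identifier is kept locally.
    key_id = client.create(enums.CryptographicAlgorithm.AES, 256)

    # At boot, the server retrieves the key material to unlock its drives,
    # so nothing useful remains on a drive that is removed from the premises.
    secret = client.get(key_id)
```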
Backups
Solid state drives brought about a step change in reliability compared to mechanical hard disks, but operational backups remain an important insurance policy that should allow you to recover your nascent technology platform from issues such as administrator errors, rogue employees and ransomware. It is important that backups are isolated from the main infrastructure to prevent a data loss incident from affecting them too. For instance, while a database replica in another region may provide a second copy of the data, if that replica is kept up to date with the primary then any logical error (perhaps a command to delete all the tables instead of just one) will instantly propagate to the replica with no chance of recovery.
As an insurance against disaster, implementing a comprehensive backup solution can be easy to postpone or de-prioritise in favour of more exciting or client-facing objectives. Do not let perfect be the enemy of good enough! IT systems often provide simple configuration or data export capabilities which can be scripted to run periodically from a Linux server's crontab. The exported data can then be uploaded to low-cost offsite storage, for example Amazon S3 or Backblaze B2, paying careful attention to the access permissions on these services to avoid an embarrassing and costly data breach.
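As one possible shape for such a script, the sketch below uses boto3 to push a nightly configuration export to S3-compatible object storage under a date-stamped key. The bucket name and file path are placeholders, and a single crontab entry could invoke the export and this upload each night.

```python
# Minimal sketch: uploading a nightly configuration export to offsite,
# S3-compatible object storage with boto3, suitable for running from a
# crontab entry. Bucket name and file paths are placeholders.
import datetime
import boto3

BUCKET = "example-infra-backups"                # assumed private bucket
EXPORT_PATH = "/var/backups/config-export.tar.gz"

def upload_backup() -> str:
    """Upload today's export under a date-stamped key and return the key."""
    key = f"config/{datetime.date.today().isoformat()}/config-export.tar.gz"
    s3 = boto3.client("s3")  # credentials come from the environment or instance profile
    s3.upload_file(EXPORT_PATH, BUCKET, key)
    return key

if __name__ == "__main__":
    print(f"uploaded to s3://{BUCKET}/{upload_backup()}")
```

Even a simple scheme like this, combined with a locked-down bucket and periodic restore tests, is far better than having no offsite copy at all.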
Conclusion
Bootstrapping is acknowledged as a hard problem in our field, requiring practitioners to use primitive tools and techniques rarely seen in mature systems. The same is true when building new infrastructure on a so-called greenfield site, and this article has described my recommendations for what to prioritise and focus upon to build strong foundations. An important finding of my own work is that implementing good practices early on leads to a virtuous cycle and the creation of a strong culture that will pay dividends in the future, regardless of whether the project remains small or grows exponentially.