IT experts today face myriad ethical decisions. Because of their unique place in the enterprise - sitting at the crossroads of business users, the applications they use and the data they consume - they have ample opportunity both to serve and to injure the organisation they work for. If a business application such as finance, billing, or the HR system crashes, it’s IT to the rescue. It’s all in a day’s work. But at the same time, these ‘super powers’ also give IT professionals opportunity for more nefarious pursuits.
Take the HR system for example. Oftentimes IT experts enjoy administrative privileges to these (and other vital) systems so that they can recover the app should it fail. But those same privileges also give them access to look up compensation details of virtually anyone in the organisation. Should they do this? Probably not. Do they? You bet.
A recent survey by One Identity found that two out of three administrators with elevated access admitted to ‘snooping’ for data that they did not need to perform their job.
But is this really snooping?
One way to view IT activity is through the comparative lens of the real world. How would IT activities be viewed if an analogous activity occurred in the physical realm? In the physical world, your office, your desk, and your filing cabinet all belong to your employer. It’s understood that anyone from your manager to the friendly folks from facilities can randomly enter your office and rummage around in your desk.
To be sure, this sort of search would be frowned upon by everyone from the employees to the leaders, but it could happen. However, for some reason, we feel the computer is our property and the data contained within is ours. At most organisations, your computer is actually your employer’s property, but we still feel personal ownership. So, when an IT expert randomly searches your files, we feel violated - but should we?
Setting aside the fact that IT experts snoop, if your organisation is typical, you are subject to an ‘equipment use’ policy stating that ‘your’ PC or laptop actually belongs to your employer and that, as such, you shouldn’t conduct personal activities on it.
What happens if the IT expert ‘snoops’ on a computer and finds evidence of activity that places the enterprise in violation of some regulation, for example, copying customer data into an Excel spreadsheet for a marketing campaign, which could be in violation of the EU General Data Protection Regulation (GDPR)? Is IT obliged to report it?
What if they found evidence of a felony? Are they obligated to report it to management, HR or the authorities? All of this is complicated by the fact that such reporting would expose their snooping - a difficult ethical dilemma to be sure.
Educators in peril
In the US, lawmakers have, to some extent, solved this dilemma. In education, educators are required by law to report any incident of suspected child abuse or endangerment. Oftentimes this puts the educator in peril. They may end up in a court of law or perhaps branded as a ‘whistleblower.’ In addition, the US has ‘Good Samaritan’ laws that offer legal protection to people who give reasonable assistance to those who are, or who they believe to be, in peril. Is either of these perfect? Of course not, but the intent is moving in the right direction - an attempt at standardisation, so all parties understand respective expectations.
The IT landscape is becoming far more complex with the advent of new technologies such as the internet of things (IoT). Today, many employees wear ‘fitness trackers’ at work to count steps and measure calories burned. In general, these communicate with smartphones, which in turn are connected, via a company network, to the internet where data is uploaded, stored and analysed. Whose data is that? Does IT have a right to access that data? They certainly can. And IT admins admit they do snoop.
The issue will become even more complicated as health-tracking IoT devices proliferate beyond counting steps. The near-term future holds internet-enabled heart monitors and pacemakers, glucose measurement devices, and more, all of which will be internet-enabled to provide real-time communication to healthcare providers. Should the enterprise capture this data? Will they use it to determine compensation or hire/fire decisions?
The influence of government
Another ethical vector pointed squarely at IT is the influence of government regulations and mandates. This is on display most vividly in the UK, where pundits repeatedly argue that internet traffic should not just be monitored, but also censored. In fact, the UK Government is threatening to mandate that ‘platforms’ such as Facebook, LinkedIn and Twitter deploy technology to filter out terrorist content.
While I think we can all agree on what is and what is not terrorist content, once this tech is installed and potentially managed by governmental policy makers, who determines what is objectionable? Rival political parties? Content from those whose lifestyles they may disagree with? Likely? I doubt it. Possible? Who ever thought reality-star and Twitter-addict Donald Trump would lead the free world... more or less?
This article highlights just a few areas where IT and ethics collide, but why is this so hard? Why hasn’t it already been solved? There are a few issues at play here. First, there is a chasm between the depth of technical understanding of those who make policy and those who live it.
Governments and regulatory agencies would be wise to consult more closely with ‘rank and file’ IT experts to understand the ramifications of their word choices. Perhaps the best example of where inclusion resulted in better regulations is the payment card industry’s (PCI) standards for secure credit card transactions.
The PCI standard is very explicit about both its intent and the ways to enact it. Other regulations, particularly those coming from government (such as GDPR), as opposed to business-sponsored organisations (like PCI), are vague about implementation.
It’s difficult for a single enterprise to solve vague policy issues. To fill that gap, enterprises should clarify and document their expectations via an IT code of conduct. Several of these exist, such as the one from SANS, a cooperative research and education organisation reaching more than 165,000 security professionals worldwide.
Organisations should also enact a strategy of ‘trust, then verify’ to protect both the IT experts and the organisation from litigation or compliance findings. There are several classifications of products and activities that can assist in this pursuit:
- Privileged access management solutions help organisations control who has access to the keys to the kingdom.
- Log management collects every activity performed on a computer or device (in computer parlance, the audit log). Generally, these products offer security and non-repudiation, ensuring the logs cannot be tampered with undetected.
- Continuing education for IT staff on the expectations and ramifications of their actions allows organisations to protect both IT and the business.
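To make the non-repudiation point above concrete, here is a minimal sketch of one common technique behind tamper-evident audit logging: chaining each log entry to a hash of the one before it, so that altering or deleting any entry breaks the chain. The function names and entry format are illustrative, not taken from any particular product:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry in the chain

def append_entry(log, event):
    """Append an event, linking it to the hash of the previous entry."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    payload = prev_hash + json.dumps(event, sort_keys=True)
    log.append({
        "event": event,
        "prev": prev_hash,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    })

def verify(log):
    """Return True only if no entry has been altered, removed or reordered."""
    prev_hash = GENESIS
    for entry in log:
        payload = prev_hash + json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256(payload.encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"user": "admin01", "action": "open", "file": "payroll.xlsx"})
append_entry(log, {"user": "admin01", "action": "export", "file": "payroll.xlsx"})
print(verify(log))   # True

log[0]["event"]["file"] = "notes.txt"  # an admin quietly rewrites history
print(verify(log))   # False - the chain no longer validates
```

Commercial log management products use stronger variants of this idea (signed hashes, write-once storage), but the principle is the same: the administrator being audited cannot silently edit the record of their own activity.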
Regardless of the solutions deployed, organisations should evaluate their usefulness beyond just protection; security products are increasingly being built to make the business more agile and productive.
Many industries have codes of conduct specifically to mitigate the ethical dilemmas those professionals face regularly. The time has come for IT to enact a similar code of conduct, giving IT professionals, their employers and the user community at large confidence in, and an understanding of, what they can expect from IT.