Question: You’re driving a bus with 30 people on board. The bus stops to pick up three passengers and drop off 12. At the next stop, it picks up five passengers and drops off seven. Further along, five passengers get on, but only one gets off. Twelve minutes later, the bus finally arrives at the terminus. What is the driver’s name?
Brain teasers like this work because people assume certain information about the situation. If you haven’t figured out the answer to the puzzle, it’s because your brain has assumed it’s a maths problem when it isn’t. The key to finding the driver’s name is in the first word (‘You’re’), but your mind may have discarded that information while it was preparing to add and subtract numbers.
Making assumptions about situations is a normal human trait - otherwise we’d all be feeling insecure and constantly checking our surroundings every few minutes. And we’d never get any work done. But people can exploit these assumptions. Take a recent example where email and social networking accounts were hacked and messages sent to the victim’s friends:
‘Please help. I’m on holiday and all my money and plane tickets have been stolen. I’M REALLY SCARED. I can’t pay my hotel bill and the manager is threatening to throw me out. PLEASE can you lend me $200? Send it to ...’ Those who fell for the scam weren’t wondering why their friend hadn’t mentioned a holiday - they just saw that a trusted friend was in danger. They were adding and subtracting bus passengers. All it took was the right message delivered in the right way.
Those of us who were in the IT industry in May 2000 will remember the Lovebug (or ‘I Love You’) email worm, which also played on emotional connections with a friend. But for a younger generation, this type of thing is new.
This illustrates one of the reasons why everyday security advice might only be partially effective. Rules or warnings that are given before someone encounters a situation can be forgotten if the person doesn’t personally identify with the situation. For businesses that depend on people (staff or customers) to follow safe computing practices, I believe this is an underestimated problem.
Big companies have firewalls, malware controls, patch management systems and event monitoring, but they still suffer from security breaches - often due to a human factor. In organisations with tight security, there are sometimes people who assess the ‘human threat’. But elsewhere, what happens when senior managers find out about a security breach and insist that it should never happen again?
Yep - we upgrade the software or write a new policy. But it’s also worth looking at what staff actually do in their day-to-day jobs, partly to ensure that they don’t give away money or information without good reason - whether they realise they’re doing so or not.
Social engineers can obtain information from people without directly asking for it. For example, they might find out about a company’s IT security measures in preparation for a hacking attack: several people at the company could be contacted and asked to complete a telephone survey that is mostly about general IT, but has a small section at the end asking what firewall, anti-virus products and other security tools the company uses. Add a sweetener to tempt them - maybe a prize draw for an iPad if they complete the survey.
If the information is forthcoming, it might indicate that staff at the company aren’t security-aware. If the staff say that security details are confidential and won’t participate, then they probably work within a tight security framework. But the main information gained is a valuable insight on what level of sophistication is needed to attack the company’s IT and how strong the defences may be. Any answers to the questions are the icing on the cake.
Kevin Mitnick (one of the most widely known hackers) discovered in his youth that it’s sometimes far easier and less risky to manipulate humans who have access to information than to try to break into the systems directly. That way, you don’t have to deal with security technology that logs and tracks your early access attempts. I’ve seen a ‘security survey’ take place in my office, and the only defence was for everyone to refuse to participate at every stage.
Why aren’t social engineers widely mentioned in the media? I don’t believe they are excluded, and I suspect they are already reported as hackers, fraudsters or con artists; social engineering is a relatively new term. It’s also difficult in a short article to explain how a social engineer could ‘break in’ by using the power of suggestion - it sounds more like something from a Derren Brown TV Spectacular than real life.
So how do you steal company information without touching a keyboard? There are many ways, but there are some common elements that make most types of social engineering attack easier to pull off.
For example, new starters at a company are often still finding their way around and meeting people. They are more trusting and reliant on others, and less likely to know the company culture and security procedures. They may have had induction training on their first day, but then, how much do you remember from your induction? Because of this, new starters are more vulnerable. And finding them can be easier than ever.
If someone has a personal blog, they will sometimes talk about work. Because of media horror stories about social networking, they probably won’t post exact details of where they work or what they’re doing in their job. But it’s sometimes possible to pull enough pieces together from different social media accounts. Let’s say that a person called Amy has posted the following on her blog (in brief):
March: ‘Just got a new job.’
April: Regular postings about life and family news
May: Only one entry - ‘It’s very busy at work. Can’t say why - it’s all secret.’
June: No blog entries at all.
July: Only one entry - ‘Away on business. Promise to post updates soon.’
August: ‘Going away for a long weekend. Looking forward to a two-week holiday first week in October.’
Amy’s blog has links to her Google Picasa account, where she posts her photos.
The social engineer, Steve, has been hired by one of AnyCorp’s underhanded competitors to confirm rumours about AnyCorp expanding into the Americas with a new product. Steve looked at AnyCorp’s LinkedIn page and saw that Amy was a new starter.
On finding Amy’s personal profile, he noticed the links to her Wordpress blog and from the blog to her Picasa account - where photos of New York were posted in July. Steve now has evidence of a project in America and knows that Amy is working on it (new job, secret project, busy at work, business trip, photos of New York).
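Piecing together signals like Amy’s is essentially a timeline-correlation exercise. The sketch below illustrates the idea in Python - all posts, keywords and inferences are invented for illustration and don’t come from any real account; a real investigator would work with far messier data:

```python
# Minimal sketch of OSINT timeline correlation.
# All names, posts and keywords below are hypothetical.

posts = [
    ("March",  "blog",   "Just got a new job."),
    ("May",    "blog",   "It's very busy at work. Can't say why - it's all secret."),
    ("July",   "blog",   "Away on business. Promise to post updates soon."),
    ("July",   "photos", "Album: New York"),
    ("August", "blog",   "Looking forward to a two-week holiday first week in October."),
]

# Each keyword, if found, supports the hypothesis
# 'new starter working on a secret US project'.
indicators = {
    "new job":  "recently joined the company",
    "secret":   "working on something confidential",
    "business": "travelling for work",
    "new york": "the work involves the Americas",
}

# Scan every post for every indicator keyword (case-insensitive).
evidence = []
for month, source, text in posts:
    for keyword, inference in indicators.items():
        if keyword in text.lower():
            evidence.append((month, source, inference))

for month, source, inference in evidence:
    print(f"{month} ({source}): {inference}")
```

No single post gives anything away; it’s the accumulation of small, individually harmless matches across separate accounts that builds the picture.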
This isn’t really social engineering though - it’s all information in the public domain. But if Steve calls Amy on the Friday of her long weekend, she won’t be there to answer. Instead, a helpful colleague will likely give the name of someone deputising for Amy in her absence. That’s the name of a second project team member gained. And that’s social engineering.
These people are the entry points to further details of the project - the equivalent of open firewall ports. What remains is to find a way through. Perhaps Steve would place a call to the deputy, claiming to be from the press office and asking about a big press release that Amy mentioned before she left.
Of course, there will be no press release to go out (if there is, it’s a fantastic result), but Steve can ask for the basic points about the project - such as when a press release is most likely to be due (read: when the American expansion will go public). Again, small details that are seemingly unimportant can be assembled into a rough project plan - justifying Steve’s pay cheque and helping the strategy of AnyCorp’s competitor.
How does a company prevent information leaking in this way? It’s difficult to stop, and just as difficult to detect when it does leak - especially if employees’ personal social media accounts are involved. There may be some merit in advising staff about social engineers, and perhaps in asking them to separate career and home social media accounts.
And staff who are asked about confidential information shouldn’t give it out unless they personally know and recognise the caller. Policies can formalise this latter point, and an information security audit can tell you how well it is working. But the most important thing is for everyone to be aware of the risks.
Don’t let your mind believe that hacking is purely a technical problem. As Kevin Mitnick wrote about social engineers: ‘If you think you’ve never encountered one, you’re probably wrong’.
Michael Pike is an information security consultant who works with companies of all shapes and sizes to manage the risks they may - or may not - be aware of.
- The Art of Deception - Kevin Mitnick & William L. Simon; Wiley. A reformed Mitnick explains social engineering methods and how they work.
- Cyberpunk: Outlaws and Hackers on the Computer Frontier - Katie Hafner & John Markoff; Simon & Schuster. Real-life tales of social engineering and hacking; old, but much is still relevant today.
- Tricks of the Mind - Derren Brown; Channel 4 Books. Goes into the workings of the mind, and how it can be fooled without us knowing.
- Rogue Trader - Nick Leeson; Warner Books. How damning evidence can be covered up with charm and persuasion.