“Trust, but verify,” goes the old saying. Fished, no doubt, from the same pool of wisdom that includes “caveat emptor” and a school of other supposedly common-sense quips that nobody seems to follow when it counts. When it comes to critters that land in our inboxes, the smarter practice would be to “distrust until re-verified.” Social engineering attacks, such as email phishing, are growing ever more sophisticated, and the malicious phishes are increasingly difficult to spot, even for those of us on guard. Consider two examples of smelly phish that jumped into my inbox.
The first one purportedly comes from Chase Bank. I’ve done business with Chase, so the first hurdle was cleared by chance. The logo is accurate, as are the colors and most of the layout. The reply email address also looked plausible, spoofed to appear to have originated from the addresses seen on actual messages from Chase. As a consequence, the message-highlighting filter I had set up “correctly” flagged this message as something from the bank and in need of attention. Something I set up for convenience could work against me by lending a false sense that the message is authentic.
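To see why such a convenience filter backfires, here is a minimal sketch (not my actual mail setup) of a highlighting rule that matches on the From header alone. The header is trivial to forge, so a spoofed message sails right through; the domain check shown is purely illustrative.

```python
from email.message import EmailMessage
from email.utils import parseaddr

def looks_like_chase(msg):
    """Match on the From header alone -- a header any sender can forge."""
    _, addr = parseaddr(msg.get("From", ""))
    return addr.lower().endswith("@chase.com")

# A spoofed message with a forged From header fools the rule completely.
spoofed = EmailMessage()
spoofed["From"] = "Chase Bank <alerts@chase.com>"  # forged by the attacker
print(looks_like_chase(spoofed))  # True -- highlighted as "from the bank"
```

Anything keyed off sender-supplied headers inherits this weakness, which is why the highlight is a convenience, never an authenticity check.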
Fortunately, the email server already did a good job of identifying this message as SPAM (notice the message subject, modified by the system to include the warning “POTENTIAL SPAM”) and routed the message to a folder separate from my inbox. The second example, however, was missed by the server’s SPAM filters and did end up in my inbox.
This is interesting because it is a social engineering attack (phishing) leveraging a social networking site. A subtle attribute of this attack is that people are inclined to expect unsolicited messages from strangers on social networking sites. Again, the appearance is mostly accurate. Nonetheless, there are several things that aren’t quite right. First off, the link text doesn’t match what I generally see in legitimate LinkedIn messages; an entire URL used as the link text isn’t something you normally encounter. Presumably, it is included so that the recipient sees the “www.linkedin.com” domain in the link text (different from the hidden link URL) and therefore derives some measure of subconscious confidence that the message and the link are authentic. The actual link in this case goes to a .ru domain.
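This text-versus-href mismatch is mechanical enough to check automatically. Here is a rough sketch, using only the Python standard library, that walks the anchors in a message’s HTML and flags any whose visible text names one domain while the href points at another. The sample HTML and domains are made up for illustration.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collect (href, visible text) pairs from anchor tags."""
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def domain(url):
    """Extract the hostname, tolerating URLs written without a scheme."""
    return urlparse(url if "://" in url else "http://" + url).hostname or ""

def suspicious_links(html):
    """Flag anchors whose visible text looks like a URL on one domain
    while the actual href points somewhere else."""
    auditor = LinkAuditor()
    auditor.feed(html)
    return [(text, href) for href, text in auditor.links
            if "." in text and domain(text) and domain(text) != domain(href)]

# Fabricated example in the spirit of the phish described above.
sample = '<a href="http://evil.example.ru/login">www.linkedin.com/confirm</a>'
print(suspicious_links(sample))
```

Run against the sample, it reports the mismatched pair; ordinary anchors whose text is plain prose (“Click here”) pass untouched. It is a heuristic, not a verdict, but it automates exactly the eyeball check described above.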
Secondly, the message was sent to “email@example.com”. An advantage of running my own email server is that it can be set up to accept any user name on a particular domain. If a particular web site, say for example the now defunct Aloha Airlines, requires an email address to use some part of their site, I can create it on the fly and configure a filter later to redirect such messages as needed. It also tells me whether or not the people I do business with have sold their email lists or suffered some sort of compromise. In this case, I get plenty of SPAM sent to the alohaairlines user account at geckopad, and did even before Aloha Airlines went belly up a few years ago.
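The filtering half of that catch-all scheme is simple to sketch. The snippet below, a toy stand-in for a real mail filter, routes messages by the local part of the recipient address; the folder names and addresses are hypothetical, not the ones on my server.

```python
# Toy per-site address routing for a catch-all domain, where the local
# part of the address names the business that was given the address.
FOLDER_RULES = {
    "alohaairlines": "Travel/Aloha",   # hypothetical folder names
    "chase": "Finance/Chase",
}

def route(recipient):
    """Pick a destination folder from the local part of the address."""
    local = recipient.split("@", 1)[0].lower()
    return FOLDER_RULES.get(local, "INBOX")

print(route("alohaairlines@example.com"))  # → Travel/Aloha
print(route("stranger@example.com"))       # → INBOX
```

The payoff is forensic as much as organizational: when SPAM starts arriving at an address only one company ever knew, you know exactly who leaked or sold it.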
Seems obvious when cues and clues like these are pointed out. However, a recent InfoWorld article illustrates the danger of seemingly intelligent people acting carelessly.
“Then over drinks one day, a buddy who is a security consultant casually mentioned that human compromises were just as common as technology vulnerabilities.”
Keen to quantify this collective brain fail, the admin’s team set up [an email phishing] test.
And the results?
“Now I know why those Nigerian princes keep bothering people,” the admin says. “Our current malware technology caught only 58 percent of our home-brew phishing mails. On top of that, because we didn’t use the usual Nigerian-prince or $1-million-estate-up-for-grabs schemes, we managed to get 64 out of 138 to click on our ‘malware’ link.”
Moral: Educate your users about social engineering, because rich Nigerian royalty, or corporate data raiders, can get you no matter what kind of antimalware you have.
And certainly educate yourselves. Continuously.