
Why Phish Should Not Be Treated as Spam


More than 500 million phishing e-mails appear in user inboxes every day. While this number pales in comparison to spam (which accounts for almost 70% of all e-mail traffic), spam is mainly a nuisance, whereas phishing can lead to costly security breaches. In the U.S. alone, phishing attacks on customers have been reported to result in direct financial losses of several billion dollars per year. But for corporations and government organizations, this is just the tip of the iceberg, as more-targeted "spear phishing" attacks can lead to potentially devastating security breaches, loss of sensitive data, and significant financial losses.

Anti-Spam Vendors: Comparing Apples with Oranges

While most anti-spam/anti-virus vendors have repurposed their filters to also catch phishing e-mails, their solutions rely primarily on manually maintained "black lists." To minimize the risk of flagging legitimate sites, these black lists typically take the form of fraudulent URLs vetted by human reviewers. By their very nature, such lists are always one step behind, lagging by at least several critical hours and sometimes days. During that lag, many phishing e-mails go undetected by spam filters, and many of the malicious websites to which phishing victims are directed are not flagged by their browsers, since browsers rely heavily on black lists as well. Yet studies have shown that during regular work hours, 50% of users who fall for phishing attacks read the phishing e-mail within two hours of its reaching their inbox, and 90% do so within eight hours. In other words, a lag of just a few hours in updating black lists can have devastating consequences.

"Reply-to" phishing e-mails, which contain no attachments and no links, are another example of attacks that often go undetected by anti-spam/anti-virus filters. This is due in part to those filters' reliance on simple "bag of words" techniques, which look for e-mails containing collections of words indicative of spam. Such techniques are good at catching spam, but they cannot differentiate phishing e-mails from legitimate e-mails, since many phishing e-mails are crafted to look just like legitimate correspondence.
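
To make this limitation concrete, the following minimal sketch (in Python, with invented word weights and example messages that are not drawn from any real filter) shows the kind of scoring a bag-of-words filter performs. A bulk spam message full of telltale vocabulary is easily flagged, while a reply-to spear-phishing message that reads like ordinary business correspondence scores zero and is delivered.

# Minimal bag-of-words spam scorer (illustrative only; weights are invented).
# A real filter would learn weights from labeled mail, but the failure mode
# is the same: the score depends on spam-indicative vocabulary.

SPAM_WEIGHTS = {
    "lottery": 3.5, "winner": 2.5, "prize": 2.0,
    "free": 1.5, "click": 1.0, "unsubscribe": 1.0,
}
THRESHOLD = 3.0  # messages scoring above this are sent to the junk box

def spam_score(message: str) -> float:
    words = message.lower().split()
    return sum(SPAM_WEIGHTS.get(w, 0.0) for w in words)

bulk_spam = "You are a lottery WINNER click here for your FREE prize"
spear_phish = ("Hi Dana, finance needs the updated wire instructions today. "
               "Can you reply with the account details before 3pm? Thanks, Bob")

print(spam_score(bulk_spam) > THRESHOLD)    # True  -> sent to the junk box
print(spam_score(spear_phish) > THRESHOLD)  # False -> delivered to the inbox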

Ironically, this state of affairs is not obvious to anyone looking at the statistics advertised by many vendors promoting their anti-virus/anti-spam filters. Many continue to boast about catching "up to 99%" of malicious e-mail, a statement that lumps together spam, viruses, and phishing attacks. Because almost 70% of all e-mail traffic is spam, while phishing accounts for only about 0.5%, "catching up to 99% of malicious e-mail" is an ambiguous claim that says little about phishing in particular. Moreover, the consequences of finding an unfiltered spam message in your inbox cannot be equated with the potential consequences of receiving a phishing e-mail. In other words, spam vendors are often comparing apples with oranges. They also fail to report how many false positives they generate to reach the 99% figure they boast of: legitimate e-mails misclassified as spam and moved to the junk box. In truth, reaching 99% effectiveness often requires filter settings that produce more false positives, which reduces the value of the filter because users are forced to regularly check their junk box for legitimate correspondence.
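
The back-of-the-envelope calculation below (Python, using the traffic shares cited above and an assumed, purely illustrative phishing catch rate) shows why an aggregate "99%" claim reveals little about phishing: spam dominates the volume, so the blended figure tracks spam performance almost exclusively.

# Hypothetical traffic mix and per-category catch rates (illustrative numbers).
# Spam dominates volume, so the blended figure tracks spam performance.

traffic_share = {"spam": 0.70, "phishing": 0.005}   # fraction of all e-mail
catch_rate    = {"spam": 0.995, "phishing": 0.40}   # assumed filter performance

malicious = sum(traffic_share.values())
caught = sum(traffic_share[k] * catch_rate[k] for k in traffic_share)

print(f"overall 'malicious e-mail' catch rate: {caught / malicious:.0%}")
# -> 99%, even though 60% of the phishing e-mails still reach the inbox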

Phishing Attacks That Make It Past Existing Filters Are the Ones Users Are Most Likely to Fall For

Even when it comes to phishing, not all e-mails are created equal. Studies have shown that high-volume phishing campaigns claiming to come from well-established organizations such as large banks, ISPs, or the IRS are the ones people are least likely to fall for. In contrast, targeted "spear phishing" e-mails, which are directed at small groups of people such as the employees of a particular department, or even at specific individuals, tend to be much more effective at fooling their recipients. These e-mails have been used to initiate many of the high-profile security breaches reported over the past couple of years, as well as many lower-profile attacks on smaller organizations. Statistics that simply report the percentage of phishing e-mails caught (a figure dominated by the easy-to-detect, high-volume campaigns) fail to capture these differences and therefore produce seemingly reassuring numbers that are skewed toward the least dangerous types of phishing e-mails.
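
A similar back-of-the-envelope calculation (again with invented volumes and catch rates) illustrates how a single "phishing e-mails caught" percentage can mask weak performance against spear phishing when high-volume campaigns dominate the sample.

# Invented volumes and catch rates, for illustration only.
campaigns = {
    # name: (share of phishing volume, catch rate)
    "high-volume bank/IRS lures": (0.95, 0.99),
    "targeted spear phishing":    (0.05, 0.30),
}

blended = sum(share * rate for share, rate in campaigns.values())
print(f"headline 'phishing caught' figure: {blended:.0%}")
# -> about 96%, even though 70% of the spear-phishing e-mails get through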

A Call for Better Benchmarks and What To Do While We Wait

Given this state of affairs, it is high time for industry and independent evaluation organizations to come up with benchmarks that reflect the significantly higher risks associated with phishing attacks. Simply focusing on spam and viruses is no longer a fair assessment of potential risks. We need benchmarks that look separately at phishing and tests that reflect the lag found in many filtering solutions. Testing a filter's response to a phishing attack days or even weeks after that phishing attack was first launched is not going to properly inform prospective customers about the true performance of the filter. In the meantime, organizations evaluating e-mail filtering solutions should make sure to request trial licenses and evaluate these solutions on live e-mail while looking specifically at the anti-phishing performance of the filters over periods of at least several weeks. Those who cannot afford to conduct such tests should specifically request information on the effectiveness of the solution when it comes to catching phishing e-mail.
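
As a rough illustration of the kind of measurement such an evaluation might produce, the sketch below (Python, with an entirely hypothetical trial log) reports how many phishing e-mails a filter flagged before they reached the inbox and the average detection lag for the rest; the field names and numbers are assumptions, not data from any real product.

# Hypothetical trial log: for each phishing e-mail seen during a live evaluation,
# the hour it reached the inbox and the hour (if ever) the filter first flagged it.
from statistics import mean

trial = [
    {"arrived_h": 0.0, "flagged_h": 6.0},
    {"arrived_h": 1.0, "flagged_h": 0.5},   # URL was already on the black list
    {"arrived_h": 2.0, "flagged_h": None},  # never flagged
    {"arrived_h": 3.0, "flagged_h": 30.0},
]

caught_on_arrival = [m for m in trial
                     if m["flagged_h"] is not None and m["flagged_h"] <= m["arrived_h"]]
print(f"caught before reaching the inbox: {len(caught_on_arrival)} of {len(trial)}")

late_lags = [m["flagged_h"] - m["arrived_h"] for m in trial
             if m["flagged_h"] is not None and m["flagged_h"] > m["arrived_h"]]
print(f"mean detection lag for those flagged late: {mean(late_lags):.1f} hours")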


Dr. Norman Sadeh is a Professor of Computer Science at Carnegie Mellon University. He is also cofounder and chief scientist of Wombat Security Technologies, a company that has commercialized a suite of cyber security training software products and anti-phishing filtering products.

