Seeing Through The Deception - A Model for Detecting High Interaction Honeypots in the Wild
A honeypot is an intentionally vulnerable computing system designed to deceive attackers into revealing their tools and techniques. The goal of that deception is either to develop new countermeasures or to distract attackers from targeting production systems. But do honeypots work? Are they effective in these roles?
The honest answer is that no one knows. The reason no one knows is that honeypot technology has a hidden bias: it only captures attacks directed at it. The bias itself may be self-evident, but a critical side effect is less obvious. Spitzner introduced honeypots in 1999, and within five years research appeared describing methods to detect and attack them. Two reasons underpin the significance of such a discovery.
First, defenders deploy enough honeypots that adversaries have a vested interest in understanding how to attack them.
Second, and less obvious, attackers must have a reliable methodology to detect honeypots.
As a result, we don't know how many attackers simply avoid honeypots. Think about it: do you know how an attacker might detect your honeypot, or differentiate between the deceptive and legitimate systems on your perimeter? After all, attackers have a vested interest in detecting honeypots in the wild. They don't want to waste time. They don't want to increase risk without raising the potential payout. And they want to keep their tools and techniques confidential.
Oddly enough, existing research is not clear on which network or system attributes reveal a high interaction honeypot. Consequently, no one knows to what extent attackers may be detecting and avoiding honeypots. At best, this limits researchers' ability to develop more robust honeypots that can evade detection. At worst, it calls into question the legitimacy of the technology overall.
Accordingly, in this talk I present a potential model for honeypot detection, along with the results of scanning 59,392 IP addresses during a series of validation experiments. The findings demonstrate a set of characteristics usable as identification criteria.
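To make the idea of "identification criteria" concrete, the sketch below shows one way such criteria might be combined into a simple scoring model. The specific indicators (a default banner string resembling those shipped with common honeypot frameworks, an implausibly broad set of open ports, missing reverse DNS) and the weights are illustrative assumptions for this example only; they are not the talk's validated criteria.

```python
from dataclasses import dataclass, field

# Illustrative banner strings resembling honeypot framework defaults
# (assumed values for demonstration, not verified fingerprints).
KNOWN_HONEYPOT_BANNERS = {
    "SSH-2.0-OpenSSH_6.0p1 Debian-4+deb7u2",
    "220 Welcome to the ftp server",
}

@dataclass
class HostObservation:
    """Attributes collected about a remote host during a scan."""
    banner: str = ""
    open_ports: set = field(default_factory=set)
    has_reverse_dns: bool = True

def honeypot_score(obs: HostObservation) -> float:
    """Return a score in [0, 1]; higher suggests a honeypot.

    Weights are arbitrary placeholders; a real model would fit
    them against labeled scan data.
    """
    score = 0.0
    if obs.banner in KNOWN_HONEYPOT_BANNERS:
        score += 0.5
    # Legitimate hosts rarely expose many unrelated services at once.
    if len(obs.open_ports) > 10:
        score += 0.3
    # Production systems usually have reverse DNS configured.
    if not obs.has_reverse_dns:
        score += 0.2
    return min(score, 1.0)
```

For example, a host presenting a known default banner, fifteen open ports, and no reverse DNS would score 1.0, while an ordinary web server exposing only ports 22 and 80 with a proper PTR record would score 0.0.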
Dr. Jason M. Pittman is a collegiate faculty member at University of Maryland Global Campus, where he serves in the School of Cybersecurity and Technology. He recently served as an associate professor in the computer science department at High Point University. Previously, he was at California State Polytechnic University (Pomona) and Capitol Technology University. Pittman holds a Bachelor of Arts in English Literature with a secondary in Biology from Malone College. He received his Master of Science in network security and Doctor of Science in information assurance from Capitol College.
His areas of expertise are cybersecurity and artificial intelligence. His research interests include secure cloud architectures, artificial immune systems for cyber network defense, and information privacy. He brings ten years of experience at other academic institutions as well as more than 15 years of industry experience. The majority of those years have been spent at tech-focused startups, most notably as Vice President of Security Research and Development at Silent Circle. He is an active scholar with more than one hundred books, essays, journal articles, invited lectures, and conference presentations in his portfolio.