Honeypots tell you who's attacking. But to catch individuals, including suspected insiders, honeytokens let you home in.

Last week I talked about the importance of deploying honeypots to catch malicious hackers and malware. But there's a related tool that's craftier and even easier to deploy: the honeytoken.

Honeytokens contain digital data created and monitored solely as indicators of digital theft. They can be real data containing a "marker," or fake data that simply doesn't exist in the real world, at least within a given enterprise. They can be used to track malicious outsiders or insiders engaging in unauthorized activity. There are many types of honeytokens and many methods of tracking; choose yours based on your specific concerns and threat models.

Honeytokens have been used since the beginning of computer crime defense, including a clever version hatched by Clifford Stoll, early honeypot user and author of "The Cuckoo's Egg," a book based on his cybercrime-fighting adventures back in 1986 and 1987. Stoll, on the cyber trail of a German hacker, created fake content that led the hacker to believe he could request additional information on a particular subject through the mail. The address led to Stoll. The hacker downloaded the fake content, read about the information request, and sent Stoll his real return address. Stoll was able to convert a hidden, online digital identity to a physical address and person. My honeytokens should be so lucky!

Fake out the bad guys

Many companies have used simple honeytokens composed of fake email addresses, user accounts, database data, or even false programs and executables.

Fake email accounts have long been used to capture, or get early warning of, spammers. Many companies create fake email accounts and either leave them sitting in plain sight on the mail server or place them in non-publicly accessible locations on a public-facing Web server.
The idea is that the fake email address is never used, and thus would have no valid reason for receiving spam. Receiving unrequested email at the honeytoken address indicates that someone has accessed the company's internal email list or compromised a public Web server.

Another approach is to insert fake data that's highly unlikely to exist in the real world into a real database. For example, honeytoken names could be nonsensical, such as Barbx Zoologic, Roger Exinegg, and so on, or they could be celebrity names that have no association with the company. One enterprise I know used the entire Kiss lineup: Ace Frehley, Gene Simmons, Peter Criss, and Paul Stanley. It worked! Attackers sucked up the band member names in a malicious data haul and gave the organization the clues it needed to close the right exploit holes.

A few companies go so far as to create fake executables, which, if stolen by the attacker and executed, will "dial home" and send details of the hacker's environment, such as the IP address, found names, and so on. I'm not a big fan of these types of honeytokens, for two reasons. First, compromising an attacker's machine with your own Trojan and sending back information is illegal in many countries; you can't break into a thief's house just because he broke into yours. Second, I can't believe that attackers who are smart enough to break into your environment and steal your data would randomly execute a program without some sort of protection, such as blocking all ports to the Internet.

For a more effective approach, you can try honeytokens composed of real data containing hidden, embedded links that dial home. A simple example is a rigged PDF file, which, when opened, dials home using JavaScript. But even that approach is a little too likely to be foiled or discovered.

Use Web beacons

If you really want to catch a thief, why not think like a marketer?
Online advertisers are great at tracking us and our behavior across different websites, devices, and time. One popular technique is the Web beacon: a Web link to a very small embedded object, such as a one-pixel, transparent picture file. A Web beacon can be included in all sorts of real documents and is not likely to be noticed. But when the viewer opens the content with the embedded Web beacon link, the computer will dial home to fetch the Web beacon graphic. When the viewer's computer connects back to the originating server, the server's administrators can discover information about the viewer, including the Internet egress IP address, operating system, browser version, and sometimes the email address, as well as other identifying information.

The problem with hidden, embedded Web beacons is that, again, the thief can view the information in a safe environment not connected to the Internet, block outgoing ports, and so on.

Leave cookies lying around

As a honeytoken, the humble browser cookie may be a better choice. If your honeytoken plan includes the ability to place a cookie on the attacker's computer, you can track the attacker just like Google and its DoubleClick entity do on a regular basis. Alternatively, you can use Adobe Flash tracking mechanisms.

Wait, you ask, what hackers would be clueless enough to allow Web beacons or cookies to track them? Well, individually, hackers are pretty smart. But groups of hackers, which are behind most attacks today, often have at least one individual who messes up and leads the authorities to discover true identities and physical locations. All it takes is one bad guy accidentally using his nonspy fake email address on the wrong website, which can then be linked to his secret identity. This happens all the time. If you don't believe me, see Brian Krebs' website, where you can find many examples of how he successfully tracked "secret" online identities to the real person. It's not as hard as you think.
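If you want to see how little machinery the serving side of a beacon needs, here is a minimal sketch using only Python's standard library. The port, the token-in-the-path convention, and the log format are my own illustrative choices, not a production design:

```python
# Sketch of a Web beacon endpoint: serves a 1x1 transparent GIF and
# logs who asked for it. Assumes you control where the beacon URL is
# embedded; path and port below are illustrative.
import logging
from http.server import BaseHTTPRequestHandler, HTTPServer

# A valid 43-byte 1x1 transparent GIF89a, the classic "web bug" payload.
PIXEL = bytes.fromhex(
    "474946383961"          # header: GIF89a
    "01000100800000"        # 1x1 logical screen, global color table flag
    "000000ffffff"          # 2-color table
    "21f9040100000000"      # graphic control extension (transparency)
    "2c000000000100010000"  # image descriptor
    "0202440100"            # image data
    "3b"                    # trailer
)

logging.basicConfig(level=logging.INFO)

class BeaconHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The request path doubles as the honeytoken ID, e.g.
        # /b/doc-1138.gif, so you know which planted document was opened.
        logging.info("beacon hit: token=%s ip=%s ua=%s",
                     self.path, self.client_address[0],
                     self.headers.get("User-Agent", "?"))
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(PIXEL)))
        self.end_headers()
        self.wfile.write(PIXEL)

def serve(port=8080):
    """Run the beacon endpoint until interrupted."""
    HTTPServer(("0.0.0.0", port), BeaconHandler).serve_forever()
```

Embedding then amounts to placing something like `<img src="http://your-host:8080/b/doc-1138.gif" width="1" height="1">` in the planted content and waiting for the log line. As the column notes, an attacker who opens the loot offline never triggers it.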
Trap canaries

Honeytraps have also been used to identify insiders leaking information to unauthorized outsiders. In the so-called canary trap, you send (or allow access to) a nearly identical copy of a document to each suspected leaker within a group of suspects. Each honeytoken document is identical except for a unique marker, which ties the receiver to that document.

For example, the Screen Actors Guild (SAG) grew tired of its members leaking copies of movies submitted for Oscar consideration to people outside of the organization. This has happened for a long time, but such instances increased in the digital age. SAG told its members it was specifically marking each movie sent to them and warned them not to share the copy. Turns out that fair warning is not enough: At least one SAG member was caught leaking movies and was punished accordingly.

I've seen canary trap markers that were simply a few unique bits on a retouched digital photo. In one case, the encoded values were the plain-text representations of the suspect's employee ID. But unless you are looking for it, you'd never know. Not that canary traps are foolproof: All a suspicious perpetrator needs to do is compare the digital data to another representation and recapture the data in digital form. For example, in the case of an encoded picture file, a leaker could print out the picture, take a picture of it, and reconvert it to another file format before sending it along. For every offense there is a defense.

A canary trap can also be used to identify specific compromised resources. One of the most popular examples is to create fake emails that contain unique URLs, which, if read by attackers, would lead them to probe the link. Each of these unique emails can be placed in high-value targets, such as a CEO or CFO email inbox. If the attacker gains access to the inbox, the email would encourage the hacker to try the link.
Sitting on the receiving side of the URL request is a fake website (a honeypot), which alerts the incident response team that the email inbox has been compromised.

Honeytoken sticking points

Using honeytokens is considered a low-cost, high-value way to find a previously "undetectable" hacker. But there are challenges.

The first and most common challenge is making the honeytoken seem real and attractive to attackers. If you're going to create a canary trap, you'll need to devise a way, hopefully automated, to uniquely mark each honeytoken. Then you need a way to track which honeytokens you placed where. It's easy, especially over the years, to lose track of where you placed fake documents and what threats they were designed to flush out.

The biggest challenge of all is devising an alert mechanism for when someone takes the honeytoken bait. For some placements, it can be as easy as turning on file access auditing. Other deployments will require dial-home mechanisms (and all their inherent risks and challenges) or separate detection of the honeytoken outside of its original placement. Some companies use host intrusion detection systems, some use sniffers, and still others use advanced data leak protection systems. There's even a small cottage industry of firms that scour the Internet looking for evidence of your company's honeytoken data.

It's worth working out the kinks. If you're tired of the same old computer security defenses failing and you want something that really works when managed properly, look into honeytokens.
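To make the marking-and-tracking problem concrete, here is one way it can be sketched: derive each canary marker from a keyed hash, so markers are unique per recipient and can't be guessed or forged without the secret, and record every placement so a marker recovered from a leak traces back to the copy it came from. The class, field names, and 12-character marker length are illustrative assumptions, not a standard tool:

```python
# Sketch: mint per-recipient canary markers and keep a registry of
# where each honeytoken was planted. Secret, names, and marker length
# are illustrative choices.
import hashlib

class HoneytokenRegistry:
    def __init__(self, secret):
        self.secret = secret   # keyed hash input; rotate and protect it
        self.placements = {}   # marker -> placement record

    def mint(self, document, recipient):
        """Derive a stable, unique marker for one copy of a document."""
        marker = hashlib.sha256(
            f"{self.secret}:{document}:{recipient}".encode()
        ).hexdigest()[:12]
        # Remember the placement so the marker can be traced later.
        self.placements[marker] = {"document": document,
                                   "recipient": recipient}
        return marker

    def trace(self, marker):
        """Given a marker found in a leak, identify the placement (or None)."""
        return self.placements.get(marker)

reg = HoneytokenRegistry(secret="rotate-me")
m = reg.mint("oscar-screener.mp4", "member-1138")
# Later, the same 12-hex-digit string turns up in a leaked copy:
print(reg.trace(m))  # -> {'document': 'oscar-screener.mp4', 'recipient': 'member-1138'}
```

The marker string is what you would embed, whether as tweaked pixels in a photo, a unique URL in a planted email, or a fake database row; the registry is what keeps you from losing track of your own traps years later.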