
How To Cyberstalk Potential Employers

        This article is less diabolical than its title might imply. Essentially, I want to give the reader some tips for finding more information about a potential employer than the job listing may reveal. Sometimes the job description gives all of the information you could want, but often it says little about the organization's network or development environment. Sometimes job descriptions are written by people who don't even know what the terms they are using mean (10+ years of C# experience, anyone?). You could scan their whole network with Nmap, but triggering a few thousand IDS alerts is probably not a good way to ingratiate yourself with an employer. So this article will cover more passive ways to obtain information about a company's infrastructure. This is a mile-high overview, just to get your mind working in the creative ways it takes to investigate companies passively. Now, on with how to cyberstalk.
Other job postings
        This is a big "duh!" so I won't spend much time on it. Sometimes the best way to find out more information about a company's environment is to look at job postings other than the one you are applying for. Just because your job posting lacks detail does not mean all of them do.
Mail Headers
         Assuming you have had some correspondence with them, one of the first and most overlooked ways to find out more about an employer is the Internet headers of their e-mail. This will be the most technical part of the article, so bear with me. The information you can gather from these headers varies, and sometimes you won't find anything useful at all. Reading mail headers is something of a black art, but I'll show you two header examples that will give you an idea of what to look for (I've tried to sanitize these headers as much as I can; when an IP is a valid one it may not be the IP shown in the original header). Not all mail systems will return all of the information shown, so your results may vary greatly. To view these headers in Gmail, click on an individual message's dropdown menu and choose "Show original"; in Outlook, go to View->Options and look at the Internet headers; in all other e-mail readers, figure it out.
E-mail 1
Received: by with SMTP id 2cs208916wxz;
Fri, 15 Jun 2007 09:31:08 -0700 (PDT)
Received: by with SMTP id w1mr3175428wal.1181925067563;
Fri, 15 Jun 2007 09:31:07 -0700 (PDT)
Return-Path: <>
Received: from ( [])
by with ESMTP id n20si5908494pof.2007.;
Fri, 15 Jun 2007 09:31:07 -0700 (PDT)
Received-SPF: neutral ( is neither permitted nor denied by best guess record for domain of
Received: from ( [])
by (Postfix) with SMTP id 5A2B719AA6F
for <>; Fri, 15 Jun 2007 09:31:04 -0700 (PDT)
Received: (qmail 14131 invoked from network); 15 Jun 2007 16:31:05 -0000
Received: from unknown (
by ( with ESMTP; 15 Jun 2007 16:31:05 -0000
Message-ID: <>
Date: Fri, 15 Jun 2007 12:33:21 -0400
From: John Smith <>
User-Agent: Thunderbird (Windows/20070326)

MIME-Version: 1.0
Subject: Job Opportunity
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
E-mail 2
Mon, 30 Apr 2007 10:42:09 -0700 (PDT)
Received: by with SMTP id k9mr11599396pyj.1177954929370;
Mon, 30 Apr 2007 10:42:09 -0700 (PDT)
Return-Path: <>
Received: from ( [])
by with ESMTP id f24si6685580pyh.2007.;
Mon, 30 Apr 2007 10:42:09 -0700 (PDT)
Received-SPF: neutral ( is neither permitted nor denied by best guess record for domain of
Received: from ( [])
by (8.13.6/8.12.10/PMPO) with ESMTP id l3UHfuOs017638
for <>; Mon, 30 Apr 2007 13:42:04 -0400 (EDT)
Received: from ([]) by with Microsoft SMTPSVC(6.0.3790.1830);
Mon, 30 Apr 2007 13:42:00 -0400
Received: from ([]) by with Microsoft SMTPSVC(6.0.3790.1830); Mon, 30 Apr 2007 13:42:00 -0400
MIME-Version: 1.0
Content-Type: text/plain;
Content-Transfer-Encoding: quoted-printable
Received: from ([]) by with Microsoft SMTPSVC(6.0.3790.1830); Mon, 30 Apr 2007 13:41:59 -0400
Received: from ( []) by (8.13.8/8.13.8/IG Messaging) with ESMTP id l3UHfuot019772 for <>; Mon, 30 Apr 2007 13:41:58 -0400
Received: from localhost (localhost.localdomain []) by (Postfix) with ESMTP id 7A4F786593 for <>; Mon, 30 Apr 2007 13:41:56 -0400 (EDT)
Received: from ([]) by localhost ( []) (amavisd-new, port 10024) with ESMTP id 30898-05 for <>; Mon, 30 Apr 2007 13:41:55 -0400 (EDT)
Received: from b2jjones ( []) by (Postfix) with ESMTP id 773CD86279 for <>; Mon, 30 Apr 2007 13:41:55 -0400 (EDT)
X-Mailer: Microsoft Office Outlook, Build 11.0.5510
X-MimeOLE: Produced By Microsoft Exchange V6.5

X-OriginalArrivalTime: 30 Apr 2007 17:41:59.0579 (UTC) FILETIME=[DAAC66B0:01C78B4E]
X-Virus-Scanned: amavisd-new at
Content-class: urn:content-classes:message
Subject: RE: Open positions
Date: Mon, 30 Apr 2007 13:42:00 -0400
Message-ID: <>
In-Reply-To: <>
Thread-Topic: Open positions status
Thread-Index: AceGjvbv5vTw6CZrSCagiFVAzs6I1wEwmbLQ
From: "Jill Jones" <>
To: <>
        So, what information can I gather from these e-mail headers? Well, assuming this is the first e-mail contact you have had with the company, you now at least have the name of someone working there (I've highlighted the name in orange). This will come in handy as a starting point for the Google searches I'll talk about later.

        Next, notice the text I highlighted in red. These are the IP addresses/hostnames of the machines that originally sent the messages. The one in E-mail 1 is a routable IP which I can put into a WhoIs query to pull up more information about the company that owns it (I like to use the site DNSStuff for this, but the *nix command line whois or Nirsoft's Windows tools IPNetInfo and WhoisThisDomain are also very good). The IP may not belong directly to the company, but at least you will find out more about what ISP they are using. If the IP is owned by the company, you will hopefully find useful names and phone numbers in the contact information that will allow for further Google scrounging. Check out my article "What can you find out from an IP?" for more information on what you can do once you know an IP. Once you have their IP, you can also use it to search your own website's logs to see if they have visited your site; depending on your logging software, you can find out what web browsers, operating systems and maybe even what screen resolutions they are using.

        The IP in E-mail 2 starts with "172.16", which falls in a non-routable reserved range. This tells me that E-mail 2's LAN is most likely behind a NAT box of some kind. From the host name in E-mail 2 I can tell what sort of naming conventions they use for their workstations. Another useful thing to try is a basic Google search for the IP or hostname listed; if you are lucky, this may return public logs of sites that the workstation has visited.

        The text highlighted in blue tells me what mail client they are using, including the OS and exact version. The green text tells me what type of mail server they have. Even if there's not much information in the headers, it should still give you a starting point for some Google scrounging.
Google scrounging web sites/forums/Usenet posts
        Many companies leave information about themselves all over the public Internet. Johnny Long wrote a great primer on using Google to recover obscure information called "The Google Hacker's Guide" which is available at the following URL:
Johnny's book "Google Hacking for Penetration Testers" is also very good, but the primer above should be enough to get you started. One of the most useful Google operators is "site:", which lets you specify the domain you want to search. For example, if I wanted to find mentions of a company on a certain site I could use the search:
and it would return all of the pages Google knows about ending with the domain name "" and containing "CompanyXYZ" in the content/title/meta tags. I've also had great luck doing a Google search for my soon-to-be-interviewer's name and their city of residence. Using this method I've found the interviewer's blog or social network profile, and using the information from those resources I've found more pages with useful information about the company. For example, searching for a person's name may take you to a site where they have used a certain screen name or email address, and searching for that screen name or email address may lead you to a forum, blog or Usenet post that reveals more about them or their company. Another useful search to perform is:
Notice the minus symbol before the "site:" parameter. This query will return pages that contain the search text but do not reside on a server within the domain, thus filtering out a lot of noise. I've used this technique to look for company e-mail addresses: I once found a post on a car forum by a former employee, did a search for his screen name, and then found his current email address so I could ask him about his old company. It's all about taking one piece of information and building on it until you have gobs of it.
        I wish I could give better examples of Google hacking without dropping someone's dox (geek slang for revealing personal information). I've thought of doing a video on it, but I can't think of a good way of doing one without opening myself up to liability. Suffice it to say, reading Johnny Long's "The Google Hacker's Guide" should get you thinking in the right direction.
Surfing the company's site
        Just surfing a company's website will give you tons of information. By looking at the URLs of their pages you can quickly tell if they use PHP, Active Server Pages, J2EE, ColdFusion or some other dynamic web technology. If you want more passive information about a company's web environment, the headers their site returns will give you a wealth of it. Most of you should know how to do a banner grab with telnet, but a better and more passive way is to use the LiveHTTPHeaders Firefox plug-in from:
With LiveHTTPHeaders you can quickly look at the headers an HTTP request returns, like the following example I pulled from
HTTP/1.x 200 OK
Date: Mon, 02 Jul 2007 13:18:13 GMT
Server: Apache/1.3.37 (Unix) mod_throttle/3.1.2 DAV/1.0.3 mod_fastcgi/2.4.2 mod_gzip/ PHP/4.4.7 mod_ssl/2.8.22 OpenSSL/0.9.7e
X-Powered-By: PHP/4.4.7
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/html
Content-Encoding: gzip
Content-Length: 10696
From this example you can see that my hosting server is running Apache 1.3.37, what version of PHP it uses, and what versions of various Apache mods are loaded. It should be noted that many folks use NetCraft to find out this sort of information.
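If you have a banner like the one above saved as text, pulling out the fingerprinting fields is mechanical. A minimal Python sketch using a slightly abridged copy of the header block quoted above:

```python
# Abridged copy of the response headers shown above.
BANNER = """\
HTTP/1.x 200 OK
Date: Mon, 02 Jul 2007 13:18:13 GMT
Server: Apache/1.3.37 (Unix) mod_throttle/3.1.2 PHP/4.4.7 mod_ssl/2.8.22 OpenSSL/0.9.7e
X-Powered-By: PHP/4.4.7
Content-Type: text/html
"""

def parse_headers(banner):
    """Turn a raw response-header block into a dict, skipping the status line."""
    headers = {}
    for line in banner.splitlines()[1:]:
        if ":" in line:
            name, _, value = line.partition(":")
            headers[name.strip()] = value.strip()
    return headers

h = parse_headers(BANNER)
# Server and X-Powered-By are the quickest fingerprint of the web stack:
print(h["Server"])
print(h["X-Powered-By"])
```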
Social Networking Sites
        I have to admit to being a MySpace hater (I much prefer Facebook), but both social networking sites can be useful to job searchers because their search functions will let you find people who already work at your target company. Reading someone's profile or blog entry may tell you about some of the tech in the company, but more than anything it's useful for finding more terms to use for Google scrounging. If nothing else, it gives you a chance to ask a company insider about the work environment so you can decide whether you want to work there or not. Just be careful how you talk to people so you don't come off as a creepy stalker (as opposed to a sweet and lovable stalker). If you have found your interviewer's personal site or profile, it might have information that helps you build a good rapport with them during the interview, but again, try not to come off as creepy. Another social networking site that's especially made for career advancement and networking is LinkedIn:
Feel free to add me if you can find me :). Since people often post their resume and job experience on LinkedIn it's a great source of information about a company's IT environment. If you want to know about a company's "corporate culture" it's best to ask a former employee that no longer has a vested interest in the company.
    While I'm on the topic of social networking sites, there are other "Web 2.0" sites that may yield useful information. Going to each one individually takes a lot of time, but there is a way to cover many of them in one swipe: Rapleaf bills itself as an "email-based reputation lookup" service. After submitting a person's email address, Rapleaf will return what information it has about that person. If the email address has never been queried before, Rapleaf will ask you to log in (registration is free) and will then scour the Internet looking for accounts linked to that address; you will get an email when the report is ready. I did a search for one of my old email addresses and Rapleaf returned links to my Facebook, Friendster and MySpace profiles, along with links to my Flickr account and Amazon wishlist. Creepy. Also, by signing up with Rapleaf you can filter what people see when they search for your email address, but keep in mind this only protects you from people using Rapleaf, not folks Google-stalking you by hand. (Note: since I first published this, Rapleaf has become far less useful; check out some of the other links I recommend at the end of this article.)
        I hope this article has helped you think in new ways about researching prospective employers. As Tehbizz points out in the BinRev thread, you may want to be careful how much knowledge of a company's internal workings you reveal to an interviewer; it may make them paranoid about your intentions. Also, while I've focused on how to cyberstalk potential employers, keep in mind that employers can cyberstalk you using much the same techniques. Those drunken pics of you on MySpace no longer seem like such a good idea, do they? I plan to expand this article over time, so if you have any good ideas, email me or post them in the BinRev thread:
I'm especially interested in stories about how you have researched employers. Good luck with your job search.
Useful links
Since Rapleaf is no longer as useful as it once was, check out these alternatives:

Also, TinEye might be useful to you. You can feed it an image and it will try to find others like it on the Internet: This can be useful for finding duplicate images of a person. For example, you may find a picture from the company picnic and searching TinEye for it may lead you to the person's profile on some social network site. 
Maltego: Great GUI for connecting the dots and seeing how people and organizations are related.
Metagoofil: Useful for searching a company's website and extracting metadata from the files there that can lead you to more information about who works there and how they set up their internal LAN. 
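To give a feel for what a metadata tool like Metagoofil looks for, here is a deliberately rough sketch (not Metagoofil's actual code, and the sample bytes are made up): it greps raw PDF data for uncompressed /Author strings, the kind of field that often leaks internal usernames. Real tools parse the file formats properly and handle compressed streams.

```python
import re

def pdf_authors(data: bytes):
    """Very rough sketch: pull /Author entries out of raw PDF bytes.
    Only catches uncompressed literal strings; real tools do far more."""
    return [m.decode("latin-1")
            for m in re.findall(rb"/Author\s*\(([^)]*)\)", data)]

# Hypothetical fragment of a PDF pulled from a company website:
sample = b"%PDF-1.4 ... /Author (jsmith) ... /Creator (Word) ..."
print(pdf_authors(sample))
```

A leaked author name like this is exactly the kind of seed the Google-scrounging sections above build on.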
Further Research
These links should be useful to you for further research on the subject of how to cyberstalk employers.
First there's a video of Mubix's presentation from Dojo Sec on finding a job in information security:
Second there's a video of a class Brian and I did on Footprinting, Scoping and Recon where we go into depth on how to find out more information about people and organizations:

