Fix support for Symantec Web Filter Categorization (BlueCoat). Add Cisco Talos reputation checks. Add single domain reputation check feature

master
Andrew Chiles 2018-04-07 17:53:58 +02:00
parent 03f0d9beed
commit 3af8d19e65
3 changed files with 128 additions and 135 deletions

README.md

@ -4,21 +4,25 @@ Authors Joe Vest (@joevest) & Andrew Chiles (@andrewchiles)
Domain name selection is an important aspect of preparation for penetration tests and especially Red Team engagements. Commonly, domains that were used previously for benign purposes and were properly categorized can be purchased for only a few dollars. Such domains can allow a team to bypass reputation based web filters and network egress restrictions for phishing and C2 related tasks.
- This Python based tool was written to quickly query the Expireddomains.net search engine for expired/available domains with a previous history of use. It then optionally queries for domain reputation against services like BlueCoat and IBM X-Force. The primary tool output is a timestamped HTML table style report.
+ This Python based tool was written to quickly query the Expireddomains.net search engine for expired/available domains with a previous history of use. It then optionally queries for domain reputation against services like Symantec Web Filter (BlueCoat), IBM X-Force, and Cisco Talos. The primary tool output is a timestamped HTML table style report.
## Changes
- - June 6 2017
+ - 7 April 2018
+ Fixed support for Symantec Application Classification (formerly Blue Coat WebFilter)
+ Added Cisco Talos Domain Reputation check
+ Added feature to perform a reputation check against a single non-expired domain. This is useful when monitoring reputation for domains used in ongoing campaigns and engagements.
- 6 June 2017
+ Added python 3 support
+ Code cleanup and bug fixes
+ Added Status column (Available, Make Offer, Price,Backorder,etc)
## Features
- - Retrieves specified number of recently expired and deleted domains (.com, .net, .org primarily)
+ - Retrieves specified number of recently expired and deleted domains (.com, .net, .org primarily) from ExpiredDomains.net
- - Retrieves available domains based on keyword search
+ - Retrieves available domains based on keyword search from ExpiredDomains.net
- - Reads line delimited input file of potential domains names to check against reputation services
- - Performs reputation checks against the Blue Coat Site Review and IBM x-Force services
+ - Performs reputation checks against the Symantec Web Filter (BlueCoat), IBM x-Force, and Cisco Talos services
- Sorts results by domain age (if known)
- Text-based table and HTML report output with links to reputation sources and Archive.org entry
@ -26,16 +30,17 @@ This Python based tool was written to quickly query the Expireddomains.net searc
Install Requirements
- pip install -r requirements.txt
+ pip3 install -r requirements.txt
or
- pip install requests texttable beautifulsoup4 lxml
+ pip3 install requests texttable beautifulsoup4 lxml
List DomainHunter options
- python ./domainhunter.py
+ python3 domainhunter.py -h
- usage: domainhunter.py [-h] [-q QUERY] [-c] [-r MAXRESULTS] [-w MAXWIDTH]
+ usage: domainhunter.py [-h] [-q QUERY] [-c] [-r MAXRESULTS] [-s SINGLE]
+ [-w MAXWIDTH] [-v]
- Checks expired domains, bluecoat categorization, and Archive.org history to
+ Finds expired domains, domain categorization, and Archive.org history to
determine good candidates for C2 and phishing domains
optional arguments:
@ -46,22 +51,28 @@ List DomainHunter options
-r MAXRESULTS, --maxresults MAXRESULTS
Number of results to return when querying latest
expired/deleted domains (min. 100)
-s SINGLE, --single SINGLE
Performs reputation checks against a single domain
name.
-w MAXWIDTH, --maxwidth MAXWIDTH
Width of text table
-v, --version show program's version number and exit
Use defaults to check for most recent 100 domains and check reputation
python ./domainhunter.py
- Search for 1000 most recently expired/deleted domains, but don't check reputation against Bluecoat or IBM xForce
+ Search for 1000 most recently expired/deleted domains, but don't check reputation
- python ./domainhunter.py -r 1000 -n
+ python ./domainhunter.py -r 1000
- Retreive reputation information from domains in an input file
+ Perform reputation check against a single domain
- python ./domainhunter.py -f <filename>
+ python3 ./domainhunter.py -s <domain.com>
- Search for available domains with search term of "dog" and max results of 100
+ Search for available domains with search term of "dog", max results of 100, and check reputation
- ./domainhunter.py -q dog -r 100 -c
+ python3 ./domainhunter.py -q dog -r 100 -c
____ ___ __ __ _ ___ _ _ _ _ _ _ _ _ _____ _____ ____
| _ \ / _ \| \/ | / \ |_ _| \ | | | | | | | | | \ | |_ _| ____| _ \
| | | | | | | |\/| | / _ \ | || \| | | |_| | | | | \| | | | | _| | |_) |
@ -76,12 +87,6 @@ Search for available domains with search term of "dog" and max results of 100
The authors or employers are not liable for any illegal act or misuse performed by any user of this tool.
If you plan to use this content for illegal purpose, don't. Have a nice day :)
********************************************
Start Time: 20170301_113226
TextTable Column Width: 400
Checking Reputation: True
Number Domains Checked: 100
********************************************
Estimated Max Run Time: 33 minutes
[*] Downloading malware domain list from http://mirror1.malwaredomains.com/files/justdomains

domainhunter.py

@ -1,8 +1,8 @@
#!/usr/bin/env python
## Title: domainhunter.py
- ## Author: Joe Vest and Andrew Chiles
+ ## Author: @joevest and @andrewchiles
- ## Description: Checks expired domains, bluecoat categorization, and Archive.org history to determine
+ ## Description: Checks expired domains, reputation/categorization, and Archive.org history to determine
## good candidates for phishing and C2 domain names
# To-do:
@ -15,31 +15,32 @@ import random
import argparse
import json
__version__ = "20180407"
## Functions
def checkBluecoat(domain):
try:
- url = 'https://sitereview.bluecoat.com/rest/categorization'
+ url = 'https://sitereview.bluecoat.com/resource/lookup'
- postData = {"url":domain} # HTTP POST Parameters
+ postData = {'url':domain,'captcha':''} # HTTP POST Parameters
headers = {'User-Agent':useragent,
- 'X-Requested-With':'XMLHttpRequest',
+ 'Content-Type':'application/json; charset=UTF-8',
- 'Referer':'https://sitereview.bluecoat.com/sitereview.jsp'}
+ 'Referer':'https://sitereview.bluecoat.com/lookup'}
print('[*] BlueCoat Check: {}'.format(domain))
- response = s.post(url,headers=headers,data=postData,verify=False)
+ response = s.post(url,headers=headers,json=postData,verify=False)
- responseJson = json.loads(response.text)
+ responseJSON = json.loads(response.text)
- if 'errorType' in responseJson:
+ if 'errorType' in responseJSON:
- a = responseJson['errorType']
+ a = responseJSON['errorType']
else:
- soupA = BeautifulSoup(responseJson['categorization'], 'lxml')
- a = soupA.find("a").text
+ a = responseJSON['categorization'][0]['name']
- # Print notice if CAPTCHAs are blocking accurate results
+ # # Print notice if CAPTCHAs are blocking accurate results
- if a == 'captcha':
+ # if a == 'captcha':
- print('[-] Error: Blue Coat CAPTCHA received. Change your IP or manually solve a CAPTCHA at "https://sitereview.bluecoat.com/sitereview.jsp"')
+ # print('[-] Error: Blue Coat CAPTCHA received. Change your IP or manually solve a CAPTCHA at "https://sitereview.bluecoat.com/sitereview.jsp"')
- #raw_input('[*] Press Enter to continue...')
+ # #raw_input('[*] Press Enter to continue...')
return a
except:
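
Taken together, the updated BlueCoat check in this hunk reduces to roughly the following standalone sketch. The endpoint, headers, and JSON fields come from the changed lines above; the script's shared session object s is replaced with a plain requests call here, and the Site Review service may change its response format at any time.

import json
import requests

def check_bluecoat(domain, useragent='Mozilla/5.0'):
    # Query Symantec/BlueCoat Site Review for a domain's category,
    # mirroring the request added in this commit.
    url = 'https://sitereview.bluecoat.com/resource/lookup'
    post_data = {'url': domain, 'captcha': ''}
    headers = {'User-Agent': useragent,
               'Content-Type': 'application/json; charset=UTF-8',
               'Referer': 'https://sitereview.bluecoat.com/lookup'}
    try:
        response = requests.post(url, headers=headers, json=post_data, timeout=15)
        data = json.loads(response.text)
        if 'errorType' in data:
            return data['errorType']
        return data['categorization'][0]['name']
    except Exception:
        return '-'
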
@ -60,12 +61,12 @@ def checkIBMxForce(domain):
url = 'https://api.xforce.ibmcloud.com/url/{}'.format(domain)
response = s.get(url,headers=headers,verify=False)
- responseJson = json.loads(response.text)
+ responseJSON = json.loads(response.text)
- if 'error' in responseJson:
+ if 'error' in responseJSON:
- a = responseJson['error']
+ a = responseJSON['error']
else:
- a = responseJson["result"]['cats']
+ a = str(responseJSON["result"]['cats'])
return a
@ -73,6 +74,28 @@ def checkIBMxForce(domain):
print('[-] Error retrieving IBM x-Force reputation!')
return "-"
def checkTalos(domain):
try:
url = "https://www.talosintelligence.com/sb_api/query_lookup?query=%2Fapi%2Fv2%2Fdetails%2Fdomain%2F&query_entry={0}&offset=0&order=ip+asc".format(domain)
headers = {'User-Agent':useragent,
'Referer':url}
print('[*] Cisco Talos Check: {}'.format(domain))
response = s.get(url,headers=headers,verify=False)
responseJSON = json.loads(response.text)
if 'error' in responseJSON:
a = str(responseJSON['error'])
else:
a = '{0} (Score: {1})'.format(str(responseJSON['category']['description']), str(responseJSON['web_score_name']))
return a
except:
print('[-] Error retrieving Talos reputation!')
return "-"
def downloadMalwareDomains():
url = malwaredomains
response = s.get(url,headers=headers,verify=False)
@ -96,33 +119,30 @@ if __name__ == "__main__":
print("[*] Install required dependencies by running `pip install -r requirements.txt`") print("[*] Install required dependencies by running `pip install -r requirements.txt`")
quit(0) quit(0)
parser = argparse.ArgumentParser(description='Checks expired domains, bluecoat categorization, and Archive.org history to determine good candidates for C2 and phishing domains') parser = argparse.ArgumentParser(description='Finds expired domains, domain categorization, and Archive.org history to determine good candidates for C2 and phishing domains')
parser.add_argument('-q','--query', help='Optional keyword used to refine search results', required=False, type=str) parser.add_argument('-q','--query', help='Optional keyword used to refine search results', required=False, default=False, type=str, dest='query')
parser.add_argument('-c','--check', help='Perform slow reputation checks', required=False, default=False, action='store_true') parser.add_argument('-c','--check', help='Perform slow reputation checks', required=False, default=False, action='store_true', dest='check')
parser.add_argument('-r','--maxresults', help='Number of results to return when querying latest expired/deleted domains (min. 100)', required=False, type=int, default=100) parser.add_argument('-r','--maxresults', help='Number of results to return when querying latest expired/deleted domains (min. 100)', required=False, default=100, type=int, dest='maxresults')
parser.add_argument('-w','--maxwidth', help='Width of text table', required=False, type=int, default=400) parser.add_argument('-s','--single', help='Performs reputation checks against a single domain name.', required=False, default=False, dest='single')
#parser.add_argument('-f','--file', help='Input file containing potential domain names to check (1 per line)', required=False, type=str) parser.add_argument('-w','--maxwidth', help='Width of text table', required=False, default=400, type=int, dest='maxwidth')
parser.add_argument('-v','--version', action='version',version='%(prog)s {version}'.format(version=__version__))
args = parser.parse_args()
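
Reassembled from the changed and added lines above, the new argument parser looks roughly like this (line wrapping is mine):

import argparse

__version__ = "20180407"

parser = argparse.ArgumentParser(
    description='Finds expired domains, domain categorization, and Archive.org history '
                'to determine good candidates for C2 and phishing domains')
parser.add_argument('-q', '--query', help='Optional keyword used to refine search results',
                    required=False, default=False, type=str, dest='query')
parser.add_argument('-c', '--check', help='Perform slow reputation checks',
                    required=False, default=False, action='store_true', dest='check')
parser.add_argument('-r', '--maxresults', help='Number of results to return when querying '
                    'latest expired/deleted domains (min. 100)',
                    required=False, default=100, type=int, dest='maxresults')
parser.add_argument('-s', '--single', help='Performs reputation checks against a single domain name.',
                    required=False, default=False, dest='single')
parser.add_argument('-w', '--maxwidth', help='Width of text table',
                    required=False, default=400, type=int, dest='maxwidth')
parser.add_argument('-v', '--version', action='version',
                    version='%(prog)s {version}'.format(version=__version__))
args = parser.parse_args()
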
## Variables
query = False
if args.query:
query = args.query
check = args.check
maxresults = args.maxresults
if maxresults < 100:
maxresults = 100
single = args.single
maxwidth = args.maxwidth
# TODO: Add Input file support
#inputfile = False
#if args.file:
# inputfile = args.file
t = Texttable(max_width=maxwidth)
malwaredomains = 'http://mirror1.malwaredomains.com/files/justdomains'
expireddomainsqueryurl = 'https://www.expireddomains.net/domain-name-search'
@ -148,21 +168,35 @@ if __name__ == "__main__":
print(title)
print("")
print("Expired Domains Reputation Checker")
- print("")
+ print("Authors: @joevest and @andrewchiles\n")
- print("DISCLAIMER:")
+ print("DISCLAIMER: This is for educational purposes only!")
print("This is for educational purposes only!")
disclaimer = '''It is designed to promote education and the improvement of computer/cyber security.
The authors or employers are not liable for any illegal act or misuse performed by any user of this tool.
If you plan to use this content for illegal purpose, don't. Have a nice day :)'''
print(disclaimer)
print("")
print("********************************************")
print("Start Time: {}".format(timestamp))
print("TextTable Column Width: {}".format(str(maxwidth)))
print("Checking Reputation: {}".format(str(check)))
print("Number Domains Checked: {}".format(maxresults))
print("********************************************")
# Retrieve reputation for a single choosen domain (Quick Mode)
if single:
domain = single
print('[*] Fetching domain reputation for: {}'.format(domain))
bluecoat = ''
ibmxforce = ''
ciscotalos = ''
bluecoat = checkBluecoat(domain)
print("[+] {}: {}".format(domain, bluecoat))
ibmxforce = checkIBMxForce(domain)
print("[+] {}: {}".format(domain, ibmxforce))
ciscotalos = checkTalos(domain)
print("[+] {}: {}".format(domain, ciscotalos))
quit()
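
Functionally, the quick mode added above boils down to a helper like the one below (check_single is a hypothetical name introduced here for illustration; checkBluecoat, checkIBMxForce, and checkTalos are the functions defined earlier in this file):

def check_single(domain):
    # Hypothetical consolidation of the -s/--single code path:
    # run each reputation check once and print the result.
    for check in (checkBluecoat, checkIBMxForce, checkTalos):
        print("[+] {}: {}".format(domain, check(domain)))
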
# Calculate estimated runtime based on sleep variable
runtime = 0
if check:
runtime = (maxresults * 20) / 60
@ -170,14 +204,13 @@ If you plan to use this content for illegal purpose, don't. Have a nice day :)'
else:
runtime = maxresults * .15 / 60
- print("Estimated Max Run Time: {} minutes".format(int(runtime)))
+ print("Estimated Max Run Time: {} minutes\n".format(int(runtime)))
print("")
# Download known malware domains
print('[*] Downloading malware domain list from {}'.format(malwaredomains))
maldomains = downloadMalwareDomains()
maldomains_list = maldomains.split("\n")
# Create an initial session
# Generic Proxy support
# TODO: add as a parameter
@ -186,7 +219,7 @@ If you plan to use this content for illegal purpose, don't. Have a nice day :)'
'https': 'http://127.0.0.1:8080',
}
# Create an initial session
domainrequest = s.get("https://www.expireddomains.net",headers=headers,verify=False)
#domainrequest = s.get("https://www.expireddomains.net",headers=headers,verify=False,proxies=proxies)
@ -204,6 +237,8 @@ If you plan to use this content for illegal purpose, don't. Have a nice day :)'
else:
urls.append("{}/?start={}&q={}".format(expireddomainsqueryurl,i,query))
headers['Referer'] ='https://www.expireddomains.net/domain-name-search/?start={}&q={}'.format((i-25),query)
# If no keyword provided, retrieve list of recently expired domains
else:
print('[*] Fetching expired or deleted domains...')
@ -213,7 +248,6 @@ If you plan to use this content for illegal purpose, don't. Have a nice day :)'
urls.append('https://www.expireddomains.net/deleted-net-domains/?start={}&o=changed&r=a'.format(i))
urls.append('https://www.expireddomains.net/deleted-org-domains/?start={}&o=changed&r=a'.format(i))
for url in urls:
print("[*] {}".format(url))
@ -230,7 +264,6 @@ If you plan to use this content for illegal purpose, don't. Have a nice day :)'
pk_str = '5abbbc772cbacfb1' + '.1496' + str(r1) + '.2.1496' + str(r1) + '.1496' + str(r1)
jar = requests.cookies.RequestsCookieJar()
#jar.set('_pk_id.10.dd0a', '843f8d071e27aa52.1496597944.2.1496602069.1496601572.', domain='expireddomains.net', path='/')
jar.set('_pk_ses.10.dd0a', '*', domain='expireddomains.net', path='/')
jar.set('_pk_id.10.dd0a', pk_str, domain='expireddomains.net', path='/')
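
In isolation, the cookie setup for expireddomains.net shown in this hunk looks like the sketch below. The diff does not show how r1 is generated or how the jar is attached to later requests, so both are assumptions here:

import random
import requests

s = requests.Session()
r1 = random.randrange(100000, 999999)  # hypothetical value; generation not shown in this diff
pk_str = '5abbbc772cbacfb1' + '.1496' + str(r1) + '.2.1496' + str(r1) + '.1496' + str(r1)

# Pre-seed Piwik-style analytics cookies, mirroring the jar.set() calls above
jar = requests.cookies.RequestsCookieJar()
jar.set('_pk_ses.10.dd0a', '*', domain='expireddomains.net', path='/')
jar.set('_pk_id.10.dd0a', pk_str, domain='expireddomains.net', path='/')
s.cookies = jar  # one way to reuse the jar on later session requests (assumption)
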
@ -243,7 +276,6 @@ If you plan to use this content for illegal purpose, don't. Have a nice day :)'
soup = BeautifulSoup(domains, 'lxml')
table = soup.find("table")
try:
for row in table.findAll('tr')[1:]:
@ -252,8 +284,6 @@ If you plan to use this content for illegal purpose, don't. Have a nice day :)'
cells = row.findAll("td") cells = row.findAll("td")
if len(cells) >= 1: if len(cells) >= 1:
output = "" output = ""
@ -328,9 +358,9 @@ If you plan to use this content for illegal purpose, don't. Have a nice day :)'
ibmxforce = 'ignored'
elif check == True:
bluecoat = checkBluecoat(c0)
- print("[+] {} is categorized as: {}".format(c0, bluecoat))
+ print("[+] {}: {}".format(c0, bluecoat))
ibmxforce = checkIBMxForce(c0)
- print("[+] {} is categorized as: {}".format(c0, ibmxforce))
+ print("[+] {}: {}".format(c0, ibmxforce))
# Sleep to avoid captchas
time.sleep(random.randrange(10,20))
else:
@ -338,44 +368,12 @@ If you plan to use this content for illegal purpose, don't. Have a nice day :)'
ibmxforce = "skipped" ibmxforce = "skipped"
# Append parsed domain data to list # Append parsed domain data to list
data.append([c0,c3,c4,available,status,bluecoat,ibmxforce]) data.append([c0,c3,c4,available,status,bluecoat,ibmxforce])
except Exception as e: print(e) except Exception as e:
#print("[-] Error: No results found on this page!") print(e)
# TODO: Add support of input file
# Retrieve the most recent expired/deleted domain results
# elif inputfile:
# print('[*] Fetching domain reputation from file: {}').format(inputfile)
# # read in file contents to list
# try:
# domains = [line.rstrip('\r\n') for line in open(inputfile, "r")]
# except IOError:
# print '[-] Error: "{}" does not appear to exist.'.format(inputfile)
# exit()
# print('[*] Domains loaded: {}').format(len(domains))
# for domain in domains:
# if domain in maldomains_list:
# print("[-] Skipping {} - Identified as known malware domain").format(domain)
# else:
# bluecoat = ''
# ibmxforce = ''
# bluecoat = checkBluecoat(domain)
# print "[+] {} is categorized as: {}".format(domain, bluecoat)
# ibmxforce = checkIBMxForce(domain)
# print "[+] {} is categorized as: {}".format(domain, ibmxforce)
# # Sleep to avoid captchas
# time.sleep(random.randrange(10,20))
# data.append([domain,'-','-','-',bluecoat,ibmxforce])
# Sort domain list by column 2 (Birth Year)
sortedData = sorted(data, key=lambda x: x[1], reverse=True)
t.add_rows(sortedData)
header = ['Domain', 'Birth', '#', 'TLDs', 'Status', 'BC', 'IBM']
t.header(header)
# Build HTML Table
html = ''
htmlHeader = '<html><head><title>Expired Domain List</title></head>'
@ -388,7 +386,7 @@ If you plan to use this content for illegal purpose, don't. Have a nice day :)'
<th>Entries</th>
<th>TLDs Available</th>
<th>Status</th>
- <th>Bluecoat</th>
+ <th>Symantec</th>
<th>Categorization</th>
<th>IBM-xForce</th>
<th>Categorization</th>
@ -410,7 +408,7 @@ If you plan to use this content for illegal purpose, don't. Have a nice day :)'
htmlTableBody += '<td>{}</td>'.format(i[3]) # TLDs
htmlTableBody += '<td>{}</td>'.format(i[4]) # Status
- htmlTableBody += '<td><a href="https://sitereview.bluecoat.com/sitereview.jsp#/?search={}" target="_blank">Bluecoat</a></td>'.format(i[0]) # Bluecoat
+ htmlTableBody += '<td><a href="https://sitereview.bluecoat.com/sitereview#/?search={}" target="_blank">Bluecoat</a></td>'.format(i[0]) # Bluecoat
htmlTableBody += '<td>{}</td>'.format(i[5]) # Bluecoat Categorization
htmlTableBody += '<td><a href="https://exchange.xforce.ibmcloud.com/url/{}" target="_blank">IBM-xForce</a></td>'.format(i[0]) # IBM xForce
htmlTableBody += '<td>{}</td>'.format(i[6]) # IBM x-Force Categorization
@ -429,6 +427,11 @@ If you plan to use this content for illegal purpose, don't. Have a nice day :)'
print("\n[*] Search complete") print("\n[*] Search complete")
print("[*] Log written to {}\n".format(logfilename)) print("[*] Log written to {}\n".format(logfilename))
# Print Text Table
t = Texttable(max_width=maxwidth)
t.add_rows(sortedData)
header = ['Domain', 'Birth', '#', 'TLDs', 'Status', 'Symantec', 'IBM']
t.header(header)
print(t.draw())
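
The relocated report printing reduces to this minimal texttable usage (the row values below are illustrative only, in the same column order as data.append() earlier in the script):

from texttable import Texttable

t = Texttable(max_width=400)
t.header(['Domain', 'Birth', '#', 'TLDs', 'Status', 'Symantec', 'IBM'])
# One made-up row: domain, birth year, entry count, TLDs, status, Symantec category, IBM x-Force cats
t.add_row(['example-domain.com', '2009', '3', '14', 'available', 'Uncategorized', '{}'])
print(t.draw())
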

File diff suppressed because one or more lines are too long