Add h00die's swaggy recommendations
## Description

This module is an HTTP crawler; it will browse links recursively from the
website. If you have loaded a database plugin and connected to a database,
this module will report web pages and web forms.

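For example, you can check that a database is connected before running the
module; a minimal sketch (the exact output wording varies between Metasploit
versions):

```
msf > db_status
[*] Connected to msf. Connection type: postgresql.
```
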
## Vulnerable Application

You can use any web application to test the crawler.

## Options

**URI**

Default path is `/`

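For example, to start crawling from a specific path instead of the site root
(the `/app/` path here is only an example):

```
msf auxiliary(crawler) > set URI /app/
URI => /app/
```
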
**DirBust**

[...Truncated...]

**UserAgent**

Default User Agent is `Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)`

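If the target treats bot user agents specially, you can override the default;
the browser string below is only an example:

```
msf auxiliary(crawler) > set UserAgent "Mozilla/5.0 (X11; Linux x86_64)"
UserAgent => Mozilla/5.0 (X11; Linux x86_64)
```
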
## Verification Steps

1. Do: ```use auxiliary/scanner/http/crawler```
2. Do: ```set RHOST [IP]```
3. Do: ```set URI [PATH]```
4. Do: ```run```

## Sample Output
### Example against [WebGoat](https://github.com/WebGoat/WebGoat)
```
msf> use auxiliary/scanner/http/crawler
msf auxiliary(crawler) > set RHOST 127.0.0.1
[...Truncated...]
msf auxiliary(crawler) > run
[...Truncated...]
[*] Auxiliary module execution completed
```
## Follow-on: Wmap

As you can see, the raw result is not very user friendly.

But you can view a tree of your website with the Wmap plugin. Simply run:

```
msf auxiliary(crawler) > load wmap
[...Truncated...]
```
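
Once the plugin is loaded, the crawled site can be listed and its structure
displayed as a tree. A minimal sketch, assuming the crawled site was recorded
with id `0` (ids and output will vary):

```
msf auxiliary(crawler) > wmap_sites -l
msf auxiliary(crawler) > wmap_sites -s 0
```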

---

## Description

This module will detect `robots.txt` files on web servers and analyze their
content. The `robots.txt` file is supposed to be honored by web crawlers and
bots, marking locations which are not to be indexed or which are specifically
called out to be indexed. This can be abused to reveal interesting information
about areas of the site which an admin may not want to be public knowledge.

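For illustration, a `robots.txt` along these lines (the paths are invented for
this example) immediately tells an attacker where to look:

```
User-agent: *
Disallow: /admin/
Disallow: /backup/
Disallow: /old-site/
```
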
## Vulnerable Application

You can use almost any web application to test this module, as `robots.txt`
is extremely common.

## Verification Steps

1. Do: `use auxiliary/scanner/http/robots_txt`
2. Do: `set rhosts <ip>`
3. Do: `run`
4. You should get the `robots.txt` file content

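A minimal session might look like this (the address is a placeholder from the
TEST-NET example range):

```
msf > use auxiliary/scanner/http/robots_txt
msf auxiliary(robots_txt) > set rhosts 192.0.2.10
rhosts => 192.0.2.10
msf auxiliary(robots_txt) > run
```
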
## Options
**PATH**

You can set the test path where the scanner will try to find the `robots.txt`
file. Default is `/`

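For example, to look for the file under a sub-directory (the `/app/` path is
hypothetical):

```
msf auxiliary(robots_txt) > set PATH /app/
PATH => /app/
```
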
## Sample Output
```
[...Truncated...]
Disallow: /sdch
Disallow: /groups
Disallow: /index.html?
Disallow: /?
```

[...Truncated...]

```
User-agent: facebookexternalhit
Allow: /imgres
Sitemap: http://www.gstatic.com/culturalinstitute/sitemaps/www_google_com_culturalinstitute/sitemap-index.xml
Sitemap: http://www.gstatic.com/earth/gallery/sitemaps/sitemap.xml
Sitemap: http://www.gstatic.com/s2/sitemaps/profiles-sitemap.xml
Sitemap: https://www.google.com/sitemap.xml
[*] Scanned 1 of 1 hosts (100% complete)
[*] Auxiliary module execution completed
```