Network Discovery

Use nmap to identify live hosts and open ports on a network.

# nmap -A           : Scan top 1000 ports and get service versions
# nmap -sV -sC -p-  : Scan all 65535 TCP ports
# nmap -sU          : Scan UDP ports
    -sV : Attempts to determine the version of the service running on each port
    -sC : Scan with default NSE scripts; considered useful for discovery and safe
    -A  : Enables OS detection, version detection, script scanning, and traceroute
    -p- : Scan all 65535 TCP ports
    -sU : Scan UDP ports (very slow)
    -oN nmap.log : Write the output to the file nmap.log

The three scans can be launched in parallel in three different xterms.
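What nmap's TCP connect scan does can be sketched with plain sockets. This is a minimal illustration using only the Python standard library, not a replacement for nmap (a connect scan completes the full handshake, so it is slower and noisier than nmap's default SYN scan, but needs no raw-socket privileges):

```python
import socket

def tcp_connect_scan(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        # connect_ex returns 0 when the full TCP handshake succeeds.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Example usage, with a hypothetical target: `tcp_connect_scan("10.10.10.10", range(1, 1025))`.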

Although services such as FTP, web, or LDAP can run on any port, they generally use the ports reserved for them. Port 80, for example, is used by web servers for HTTP, and port 443 for HTTPS.

    20: ftp data
    21: ftp control
    22: ssh
    23: telnet
    25: SMTP (mail)
    37: Time protocol
    53: Bind/DNS
    69: TFTP (Trivial FTP)
    80: HTTP
    109: POP2
    110: POP3
    111: RPC Remote Procedure Call
    137: Netbios Name Service
    138: Netbios Datagram Service
    139: Netbios Session Service
    143: IMAP (mail)
    161: SNMP
    220: IMAP
    389: LDAP
    443: HTTPS
    445: MS Active Directory, SMB
    464: Kerberos
    1521: Oracle Database
    3000: Node JS
    3306: MySQL

The robots.txt file, when it exists, is stored at the root of a website. It contains a list of the resources of the site that are not supposed to be indexed by search engine spiders.
By convention, robots read robots.txt before indexing a website.
Its content may therefore be of interest to us.
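Extracting the disallowed paths takes a few lines; a sketch that parses a robots.txt body (the sample content is invented for illustration; in practice the body would be fetched from http://target/robots.txt):

```python
def disallowed_paths(robots_txt):
    """Extract the paths listed in Disallow directives of a robots.txt body."""
    paths = []
    for line in robots_txt.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments
        if line.lower().startswith("disallow:"):
            path = line.split(":", 1)[1].strip()
            if path:
                paths.append(path)
    return paths

sample = """User-agent: *
Disallow: /admin/
Disallow: /backup.zip
"""
```

Each extracted path is then worth visiting by hand.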

Developers sometimes leave useful information or even passwords in code comments. These are often URLs, or form fields used for testing.

Comments in the HTML or JS source code of the page
/* Secret code */
<!-- Secret code -->
Hidden HTML elements
<p hidden>Secret code.</p>
<label style='display: none'>Secret code.</label>
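A quick way to surface such leftovers is to grep the fetched page source for comments and hidden elements. A rough regex sketch (good enough for a first pass; messy real-world HTML may need an actual parser):

```python
import re

def find_leaks(html):
    """Return HTML/JS comments and hidden elements found in a page source."""
    patterns = [
        r"<!--.*?-->",                                 # HTML comments
        r"/\*.*?\*/",                                  # JS/CSS block comments
        r"<[^>]*\bhidden\b[^>]*>.*?</[^>]+>",          # hidden attribute
        r"<[^>]*display:\s*none[^>]*>.*?</[^>]+>",     # inline display:none
    ]
    hits = []
    for pat in patterns:
        hits.extend(re.findall(pat, html, flags=re.DOTALL | re.IGNORECASE))
    return hits
```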

Brute-forcing a website consists of testing for the presence of accessible pages, such as /register, /register.php, /admin, /upload, /users/login.txt, /admin/password.sav, ... For this there are lists of directories and filenames frequently found on web servers.

Once the web server language/framework is known (php, java, cgi / wordpress, joomla, ...), it is possible to use optimized lists and search only for the appropriate extensions: php, php4, php5, exe, jsp, ...
It is also possible to search for files with interesting extensions: cfg, txt, sav, jar, zip, sh, ...
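Under the hood these tools do something simple: combine each wordlist entry with each extension and request the resulting URLs. A sketch of the candidate generation (the checking step, summarized in a comment, would use an HTTP client):

```python
def candidates(base_url, words, extensions=("", "php", "txt", "sav")):
    """Yield candidate URLs: each wordlist entry with each extension."""
    for word in words:
        for ext in extensions:
            suffix = f".{ext}" if ext else ""   # "" tries the bare name/directory
            yield f"{base_url.rstrip('/')}/{word}{suffix}"

# Each candidate would then be requested; a 200/301/403 status
# (rather than 404) suggests the resource exists.
```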

Usual web brute-force software:

  • dirb: Command line. To be used for a quick check, with its 'common.txt' list.
  • gobuster: Command line. To be used with the 'directory-list-2.3-medium.txt' list from dirbuster.
  • dirbuster: GUI version; not the best choice.

It is crucial to choose the right list of directories/filenames:

  • /usr/share/wordlists/dirb/common.txt: Small, well-constructed list
  • /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt: Big list. Should cover all CTFs.
  • Once comfortable with the two previous lists, it is possible to find more optimized lists online.
  • On Kali and Parrot distributions, the /usr/share/wordlists directory contains links to many lists. Take the time to look at it in detail.


Dirb is usually preinstalled on Kali or Parrot. If not:

sudo apt-get install -y dirb

Run a quick scan with dirb, with its default 'common.txt' list:

dirb http://<target_ip>/

Download gobuster and install it in /opt:

sudo apt install p7zip-full
7z x gobuster-linux-amd64.7z
sudo cp gobuster-linux-amd64/gobuster /opt/gobuster
chmod a+x /opt/gobuster

Brute force, with the 'directory-list-2.3-medium.txt' list and the file extensions html,php,txt:

/opt/gobuster dir -w /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt -u  -l -x html,php,txt

For an HTTPS URL, add the command line option:

-k : skip HTTPS SSL certificate verification

Hydra can brute-force HTTP authentication. For HTTP Basic Authentication on a path:

hydra -l admin -P /usr/share/wordlists/rockyou.txt  -f http-get /monitoring
-l : login name
-P : password file
-f : stop after the first valid login/password pair is found
http-get : HTTP request type (Basic Authentication)
/monitoring : url path
For a login form submitted via POST, use the http-post-form module; its argument is 'path:form parameters:failure string (F=)', with an optional header (H=):

hydra -l admin -P /usr/share/wordlists/rockyou.txt http-post-form '/admin/login.php:username=^USER^&password=^PASS^:F=Wrong password:H=Cookie\: PHPSESSIONID=ms0t93n23mc2bn2512ncv1ods4' -V

Beware: if the answer is a 302 Redirect, hydra will not follow it and will generate a false positive.

For a form submitted via GET, use http-get-form:

hydra -l admin -P /usr/share/wordlists/rockyou.txt http-get-form '/login.php:username=^USER^&password=^PASS^:F=Login failed:H=Cookie\: PHPSESSIONID=ms0t93n23mc2bn2512ncv1ods4' -V

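The http-*-form logic, including the 302 pitfall, can be sketched as a loop: try each password, treat an attempt as a failure only when the failure string appears, and flag redirects for manual review. The `post_login` callable below is a stand-in for a real HTTP client such as requests:

```python
def brute_force_form(post_login, user, passwords, fail_string="Wrong password"):
    """Return the first password whose response lacks `fail_string`.

    `post_login(user, pwd)` must return (status_code, body).
    """
    for pwd in passwords:
        status, body = post_login(user, pwd)
        if status == 302:
            # A redirect body will not contain the failure string, so a
            # naive check would report a false positive here (hydra's trap).
            print(f"302 redirect for {pwd!r}: verify manually")
            continue
        if fail_string not in body:
            return pwd
    return None
```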


Wordpress URL formats:

Posts : /index.php?p=22
Login : /wp-login/
Uploaded files : /wp-content/uploads/%year%/%month%/%filename%

Config file and database credentials: wp-config.php, at the root of the site, contains the database credentials.


Wpscan knows the structure of a Wordpress site and will use brute force to identify the pages, the posts, the users, the theme, and the plugins.
Wordpress flaws are mainly due to outdated plugins.

wpscan --url -e
--url : wordpress url
-e : enum pages, posts, users, theme, plugins, ...

Login bruteforce

wpscan --url  -P rockyou.txt -U admin

You have found database credentials in the config file. Let's use the mysql client to connect and dump the database.

mysql --host=HOST -u USER -p
--host xx : Server IP or name
-u xx     : login
-p        : manually enter the password.

List databases.

show databases; 

Ignore the internal databases and choose the application database.
The database 'information_schema' contains internal information of mysql or mariadb. It can generally be ignored.
Select the application database, list its tables, then dump interesting tables such as 'users'.

use <database>;
show tables;
select * from users;
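The same enumeration flow can be scripted through Python's DB-API. The sketch below uses sqlite3 so it is self-contained; with MySQL you would use a connector such as mysql-connector-python (an assumption about your setup) and SHOW TABLES instead of the sqlite_master query, but the pattern is identical:

```python
import sqlite3

def dump_users(conn):
    """List the tables, then dump every row of the 'users' table."""
    cur = conn.cursor()
    # sqlite keeps its table list in sqlite_master; MySQL uses SHOW TABLES
    tables = [row[0] for row in
              cur.execute("SELECT name FROM sqlite_master WHERE type='table'")]
    rows = cur.execute("SELECT * FROM users").fetchall()
    return tables, rows
```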