Footprinting is the act of gathering information about a computer system and the company it belongs to. It is the first step in the hacking process, and it matters because before attacking a system the hacker needs to learn as much about it as possible. Below are examples of the steps and services a hacker would use to gather information from a website.
1. First, a hacker would start by gathering information from the target's website. Things a hacker would look for are e-mail addresses and names. This information could come in handy if the hacker was planning to attempt a social engineering attack against the company.
2. Next, the hacker would get the IP address of the website. By going to http://www.selfseo.com/find_ip_address_of_a_website.php and inserting the website URL, it will spit out the IP address (you can also do this yourself with the short DNS lookup sketched after these steps).
3. Next, the hacker would ping the server to see if it is up and running. There's no point in trying to hack an offline server. http://just-ping.com pings a website from 34 different locations around the world. Insert the website name or IP address and hit "Ping". If all packets went through, then the server is up (a simple do-it-yourself reachability check is sketched after these steps).
4. Next, the hacker would do a Whois lookup on the company website. Go to http://whois.domaintools.com and put in the target website. As you can see, this gives a huge amount of information about the company: e-mail addresses, the postal address, names, when the domain was created, when it expires, the domain name servers, and more! (The same data can be pulled straight from a WHOIS server, as sketched after these steps.)
5. A hacker can also take advantage of search engines to search sites for data. For example, a hacker could search a website through Google with "site:www.the-target-site.com"; this will display every page of the website that Google has indexed. You can narrow down the results by adding a specific word after it. For example, the hacker could search "site:www.the-target-site.com email". This search could list several e-mail addresses that are published on the website. Another search you could do in Google is "inurl:robots.txt"; this looks for pages called robots.txt. A site's "robots.txt" file lists the directories and pages that the owners want search engine spiders to stay out of. Occasionally you might come across valuable information in this file that was meant to be kept private (a quick way to grab the file yourself is sketched below).
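To make step 2 concrete, you can resolve a site's IP address yourself without any third-party service. Here is a minimal Python sketch; the domain below is just a placeholder for your target:

import socket

# Resolve a host name to its IP address (placeholder target)
target = "www.the-target-site.com"
ip = socket.gethostbyname(target)
print(target, "resolves to", ip)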
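For step 3, a real ICMP ping needs raw-socket privileges, so here is a rough stand-in that simply tries a TCP connection to the web port; if the connection succeeds, the server is clearly up. Again, the host name is a placeholder:

import socket

# Try a TCP connection to port 80 as a rough "is it up?" check
target = "www.the-target-site.com"
try:
    conn = socket.create_connection((target, 80), timeout=5)
    conn.close()
    print(target, "is up")
except OSError:
    print(target, "appears to be down or filtered")

Keep in mind that a failed connection does not prove the server is offline; a firewall may simply be dropping the probe.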
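For step 4, the information that domaintools shows comes from the WHOIS service, which is just a plain TCP protocol on port 43: send the domain name followed by a line break and read back the reply. A minimal sketch follows; whois.verisign-grs.com is the registry server for .com domains, and other TLDs use different servers:

import socket

# Query a WHOIS server directly over TCP port 43
domain = "the-target-site.com"
with socket.create_connection(("whois.verisign-grs.com", 43), timeout=10) as s:
    s.sendall((domain + "\r\n").encode())
    response = b""
    while True:
        chunk = s.recv(4096)
        if not chunk:
            break
        response += chunk
print(response.decode(errors="replace"))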
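And for step 5, grabbing a site's robots.txt directly is only a few lines; the URL here is a placeholder:

import urllib.request

# Download robots.txt and print the paths the site asks crawlers to skip
url = "http://www.the-target-site.com/robots.txt"
with urllib.request.urlopen(url, timeout=10) as resp:
    print(resp.read().decode(errors="replace"))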
Now that the basics of footprinting have been explained, we will move on to port scanning.