Information Gathering

Introduction

The essence of penetration testing is information gathering.

Information gathering is also called asset discovery.

Information gathering is the main early-stage work of a penetration test and an extremely important step: only with enough information collected can the testing that follows go smoothly. It mainly covers the target's domain information, subdomains, site details, real IP, sensitive files and directories, open ports, middleware, and so on. Collecting as much information about the site as possible, through every available channel and technique, helps us find more potential entry points and footholds.

Categories of information to collect

  1. Server information (real IP, operating system, version, open ports, WAF, etc.)

  2. Website fingerprinting (CMS, CDN, certificates, etc.) and DNS records

  3. WHOIS information: names, ICP filing, email, phone, and reverse lookups (run emails through breach databases, prepare for social engineering, etc.)

  4. Subdomain collection, neighboring sites on the same server, C-segment (same /24), etc.

  5. Targeted Google hacking: Word/spreadsheet/PDF files, middleware versions, weak-password scanning, etc.

  6. Scanning the site's directory structure: brute-forcing the admin panel, site banners, test files, backups and other sensitive file disclosures

  7. Transport protocols, well-known vulnerabilities, exploits, GitHub source code, etc.

Common techniques

  1. WHOIS lookup

    When a domain is registered, personal or company information has to be filled in.
    If privacy protection is not enabled, it can be looked up, and the ICP filing number also leads to the individual's or company's details.
    You can also reverse-search WHOIS by registrant, email, phone, or organization to find more domains and more of the information you need.

  2. Collecting subdomains

    Domains split into root domains and subdomains.

    moonsec.com - root domain (registered directly under the TLD)

    www.moonsec.com - subdomain, also called a second-level domain

    www.wiki.moonsec.com - subdomain, also called a third-level domain; fourth level and beyond follow the same pattern

  3. Port scanning

    For a server to expose a service it must open a port; ports come in two types, TCP and UDP,

    in the range 0-65535. From the ports found by scanning, access the services and plan the next step of the test.

  4. Finding the real IP

    To improve access speed or to fend off attacks, corporate sites often use a CDN; once a CDN is in place, the real server IP is hidden.

  5. Probing neighboring sites and the C-segment

    Neighboring sites: several websites hosted on the same server; look up the other sites on that IP.

    C-segment: look for sites on servers in the same /24. This can reveal sites of the same type on similar servers, and servers in the same segment can be used as footholds for the next step.

  6. Cyberspace search engines

    Use these engines to look up information about the site or server and plan the next step.

  7. Scanning sensitive directories/files

    Scanning directories and files gives a rough picture of the site structure and potential entry points, such as admin panels, file backups, and upload points.

  8. Fingerprinting

    Identify the site's version and which CMS it runs, look for public exploits, or download the CMS for code audit.


Online WHOIS lookup

WHOIS lets you query a domain's registrar, registrant, email, DNS servers, registrant phone number, and so on. Because some sites return complete data and others do not, the fairly complete lookup sites below are recommended; just enter the target domain. (A small scripted example follows the list.)

Chinaz WHOIS lookup: whois.chinaz.com/

Aizhan WHOIS lookup: whois.aizhan.com/

Tencent Cloud WHOIS lookup: whois.cloud.tencent.com/

Cndns WHOIS lookup: whois.cndns.com/

22.cn WHOIS lookup: www.22.cn/domain/

Ename WHOIS lookup: whois.ename.net/

Aliyun (Wanwang) WHOIS lookup: whois.aliyun.com/

West.cn WHOIS lookup: whois.west.cn/

Xinnet WHOIS lookup: whois.xinnet.com/domain/whoi…

Nawang WHOIS lookup: whois.nawang.cn/

Zzy.cn WHOIS lookup: www.zzy.cn/domain/whoi…

35.com WHOIS lookup: cp.35.com/chinese/who…

Dns.com.cn WHOIS lookup: www.dns.com.cn/show/domain…

International WHOIS lookup: who.is/
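Besides the lookup sites above, WHOIS can also be scripted. A minimal sketch, assuming the third-party python-whois package (pip install python-whois) and using example.com as a placeholder target:

# Minimal WHOIS lookup sketch (assumes: pip install python-whois)
import whois  # module name provided by the python-whois package

def whois_summary(domain):
    w = whois.whois(domain)  # performs the WHOIS query; returns a dict-like object
    # fields may be None or lists depending on the registry
    for key in ("registrar", "creation_date", "expiration_date", "name_servers", "emails"):
        print(key, ":", w.get(key))

if __name__ == "__main__":
    whois_summary("example.com")  # placeholder domain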

Online ICP filing lookup

ICP filing information is registered with the relevant government authorities by the website owner, as required by law. To look up a company's filing information (organization name, filing number, person in charge, email, phone, legal representative, etc.), the following sites are recommended:

  1. Tianyancha: www.tianyancha.com/

  2. ICP filing lookup: www.beianbeian.com/

  3. Aizhan filing lookup: icp.aizhan.com/

  4. Domain assistant filing lookup: cha.fute.com/index

Example: WHOIS lookup for NSFOCUS

nsfocus.com.cn

WHOIS lookup for nsfocus.com.cn


Reverse-searching the registrant and email address turns up more domains.


Collecting subdomains

Why subdomains matter

Collecting subdomains widens the testing scope; every subdomain under the same root domain is in scope.

Common approach

The asset types typically found on subdomains include office systems, mail systems, forums, online shops, and other management systems; the site's admin panel may also sit on a subdomain.

Start from the main site: the official site may link to related assets (mostly office and mail systems), and the page footer is worth a look, since it may reveal an admin panel or similar.

Ways to find information about the target domain:

  1. FOFA: title="company name"

  2. Baidu: intitle=company name

  3. Google: intitle=company name

  4. Chinaz: search the company name or domain to see related information: tool.chinaz.com/

  5. ZoomEye: site=domain, at www.zoomeye.org/

Once the official site is found, collect its subdomains. Several subdomain collection methods are recommended below; just enter the domain to query.

DNS record types

A record, alias (CNAME) record, MX record, TXT record, NS record:

A (Address) record:

Maps a hostname (or domain) to an IP address. It lets you point the domain's web server at your own server, and it is also how second-level domains under the domain are set up.

Alias (CNAME) record:

Also called the canonical name record. It lets you map several names onto the same machine, typically one that serves both WWW and MAIL. For example, a machine named "host.mydomain.com" (A record) provides both WWW and MAIL services; to make access easier, you can give it two aliases (CNAMEs), WWW and MAIL, whose full names are "www.mydomain.com" and "mail.mydomain.com" and which both actually point to "host.mydomain.com". The same trick works when several domains must point to one server IP: make one domain an A record to the server IP and alias the rest to it. When the server IP changes you no longer need to re-point every domain one by one; update the A record and every aliased domain automatically follows to the new address.

How to check a CNAME record?

1. Open a command prompt (Start menu - Run - CMD [Enter]);

2. Run "nslookup -q=cname <domain or subdomain>" and check that the returned result matches what was configured.

Example: nslookup -qt=CNAME www.baidu.com

MX (Mail Exchanger) record:

The mail exchange record points to a mail server; when a mail system sends email it uses the recipient's address suffix to locate that server. For example, when an Internet user sends a message to user@mydomain.com, their mail system looks up the MX record of mydomain.com via DNS; if one exists, the message is delivered to the mail server the MX record designates.

What is a TXT record?

A TXT record is free-form text attached to a hostname or domain, for example:

1) admin IN TXT "jack, mobile:13800138000";

2) mail IN TXT "mail host, hosted at xxx, admin: AAA", Jim IN TXT "contact: abc@mailserver.com" - in other words, you can set a TXT record so that others know how to reach you.

How to check a TXT record?

1. Open a command prompt (Start menu - Run - CMD [Enter]);

2. Run "nslookup -q=txt <domain or subdomain>" and check that the returned result matches what was configured.

What is an NS record?

An NS (Name Server) record specifies which DNS server is authoritative for resolving your domain.
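The nslookup checks above can also be scripted. A minimal sketch, assuming the dnspython package (pip install dnspython), that dumps the common record types for a domain:

# Dump common DNS record types (assumes: pip install dnspython)
import dns.resolver

def dump_records(domain, types=("A", "CNAME", "MX", "TXT", "NS")):
    resolver = dns.resolver.Resolver()
    for rtype in types:
        try:
            answers = resolver.resolve(domain, rtype)  # dnspython >= 2.0; older versions use .query()
        except Exception as exc:  # NXDOMAIN, NoAnswer, timeout, ...
            print(f"{rtype:5} - {exc.__class__.__name__}")
            continue
        for rdata in answers:
            print(f"{rtype:5} - {rdata.to_text()}")

if __name__ == "__main__":
    dump_records("baidu.com")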

Subdomain lookup

Online subdomain lookup 1

phpinfo.me/domain/


Online subdomain lookup 2

www.t1h2ua.cn/tools/


DNS reconnaissance

dnsdumpster.com/


IP138 subdomain lookup

site.ip138.com/moonsec.com…


Searching subdomains with FOFA

fofa.info/

Syntax: domain="freebuf.com"

Note: the two methods above require no brute force and are fast; use them first when you need to collect assets quickly, then fill in with other methods later.


Hackertarget subdomain lookup

hackertarget.com/find-dns-ho…

Note: this lookup also gives a rough idea of the target's IP ranges, which can then be used to collect more information.


360 Quake

quake.360.cn/

domain:"*.freebuf.com"

Layer subdomain miner (Layer子域名挖掘机)


SubDomainBrute

pip install aiodns


Run:

subDomainsBrute.py baidu.com
subDomainsBrute.py baidu.com --full -o baidu2.txt


Sublist3r

It is probably too old; it had runtime problems I never managed to fix, so I no longer use this tool.

It seems that replacing sublist3r.py with the following makes it work:

#!/usr/bin/env python
# coding: utf-8
# Sublist3r v1.0
# By Ahmed Aboul-Ela - twitter.com/aboul3la
# modules in standard library
import re
import sys
import os
import argparse
import time
import hashlib
import random
import multiprocessing
import threading
import socket
import json
from collections import Counter
# external modules
from subbrute import subbrute
import dns.resolver
import requests
# Python 2.x and 3.x compatiablity
if sys.version > '3':
    import urllib.parse as urlparse
    import urllib.parse as urllib
else:
    import urlparse
    import urllib
# In case you cannot install some of the required development packages
# there's also an option to disable the SSL warning:
try:
    #import requests.packages.urllib3
    requests.packages.urllib3.disable_warnings()
except:
    pass
# Check if we are running this on windows platform
is_windows = sys.platform.startswith('win')
# Console Colors
if is_windows:
    # Windows deserves coloring too :D
    G = '\033[92m'  # green
    Y = '\033[93m'  # yellow
    B = '\033[94m'  # blue
    R = '\033[91m'  # red
    W = '\033[0m'   # white
    try:
        import win_unicode_console , colorama
        win_unicode_console.enable()
        colorama.init()
        #Now the unicode will work ^_^
    except:
        print("[!] Error: Coloring libraries not installed, no coloring will be used [Check the readme]")
        G = Y = B = R = W = ''
else:
    G = '\033[92m'  # green
    Y = '\033[93m'  # yellow
    B = '\033[94m'  # blue
    R = '\033[91m'  # red
    W = '\033[0m'   # white
def no_color():
    global G, Y, B, R, W
    G = Y = B = R = W = ''
def banner():
    print("""%s
                 ____        _     _ _     _   _____
                / ___| _   _| |__ | (_)___| |_|___ / _ __
                ___ | | | | '_ | | / __| __| |_ | '__|
                 ___) | |_| | |_) | | __  |_ ___) | |
                |____/ __,_|_.__/|_|_|___/__|____/|_|%s%s
                # Coded By Ahmed Aboul-Ela - @aboul3la
    """ % (R, W, Y))
def parser_error(errmsg):
    banner()
    print("Usage: python "   sys.argv[0]   " [Options] use -h for help")
    print(R   "Error: "   errmsg   W)
    sys.exit()
def parse_args():
    # parse the arguments
    parser = argparse.ArgumentParser(epilog='\tExample: \r\npython ' + sys.argv[0] + " -d google.com")
    parser.error = parser_error
    parser._optionals.title = "OPTIONS"
    parser.add_argument('-d', '--domain', help="Domain name to enumerate it's subdomains", required=True)
    parser.add_argument('-b', '--bruteforce', help='Enable the subbrute bruteforce module', nargs='?', default=False)
    parser.add_argument('-p', '--ports', help='Scan the found subdomains against specified tcp ports')
    parser.add_argument('-v', '--verbose', help='Enable Verbosity and display results in realtime', nargs='?', default=False)
    parser.add_argument('-t', '--threads', help='Number of threads to use for subbrute bruteforce', type=int, default=30)
    parser.add_argument('-e', '--engines', help='Specify a comma-separated list of search engines')
    parser.add_argument('-o', '--output', help='Save the results to text file')
    parser.add_argument('-n', '--no-color', help='Output without color', default=False, action='store_true')
    return parser.parse_args()
def write_file(filename, subdomains):
    # saving subdomains results to output file
    print("%s[-] Saving results to file: %s%s%s%s" % (Y, W, R, filename, W))
    with open(str(filename), 'wt') as f:
        for subdomain in subdomains:
            f.write(subdomain + os.linesep)
def subdomain_sorting_key(hostname):
    """Sorting key for subdomains
    This sorting key orders subdomains from the top-level domain at the right
    reading left, then moving '^' and 'www' to the top of their group. For
    example, the following list is sorted correctly:
    [
        'example.com',
        'www.example.com',
        'a.example.com',
        'www.a.example.com',
        'b.a.example.com',
        'b.example.com',
        'example.net',
        'www.example.net',
        'a.example.net',
    ]
    """
    parts = hostname.split('.')[::-1]
    if parts[-1] == 'www':
        return parts[:-1], 1
    return parts, 0
class enumratorBase(object):
    def __init__(self, base_url, engine_name, domain, subdomains=None, silent=False, verbose=True):
        subdomains = subdomains or []
        self.domain = urlparse.urlparse(domain).netloc
        self.session = requests.Session()
        self.subdomains = []
        self.timeout = 25
        self.base_url = base_url
        self.engine_name = engine_name
        self.silent = silent
        self.verbose = verbose
        self.headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36',
            'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
            'Accept-Language': 'en-US,en;q=0.8',
            'Accept-Encoding': 'gzip',
        }
        self.print_banner()
    def print_(self, text):
        if not self.silent:
            print(text)
        return
    def print_banner(self):
        """ subclass can override this if they want a fancy banner :)"""
        self.print_(G   "[-] Searching now in %s.." % (self.engine_name)   W)
        return
    def send_req(self, query, page_no=1):
        url = self.base_url.format(query=query, page_no=page_no)
        try:
            resp = self.session.get(url, headers=self.headers, timeout=self.timeout)
        except Exception:
            resp = None
        return self.get_response(resp)
    def get_response(self, response):
        if response is None:
            return 0
        return response.text if hasattr(response, "text") else response.content
    def check_max_subdomains(self, count):
        if self.MAX_DOMAINS == 0:
            return False
        return count >= self.MAX_DOMAINS
    def check_max_pages(self, num):
        if self.MAX_PAGES == 0:
            return False
        return num >= self.MAX_PAGES
    # override
    def extract_domains(self, resp):
        """ chlid class should override this function """
        return
    # override
    def check_response_errors(self, resp):
        """ chlid class should override this function
        The function should return True if there are no errors and False otherwise
        """
        return True
    def should_sleep(self):
        """Some enumrators require sleeping to avoid bot detections like Google enumerator"""
        return
    def generate_query(self):
        """ chlid class should override this function """
        return
    def get_page(self, num):
        """ chlid class that user different pagnation counter should override this function """
        return num + 10
    def enumerate(self, altquery=False):
        flag = True
        page_no = 0
        prev_links = []
        retries = 0
        while flag:
            query = self.generate_query()
            count = query.count(self.domain)  # finding the number of subdomains found so far
            # if they we reached the maximum number of subdomains in search query
            # then we should go over the pages
            if self.check_max_subdomains(count):
                page_no = self.get_page(page_no)
            if self.check_max_pages(page_no):  # maximum pages for Google to avoid getting blocked
                return self.subdomains
            resp = self.send_req(query, page_no)
            # check if there is any error occured
            if not self.check_response_errors(resp):
                return self.subdomains
            links = self.extract_domains(resp)
            # if the previous page hyperlinks was the similar to the current one, then maybe we have reached the last page
            if links == prev_links:
                retries += 1
                page_no = self.get_page(page_no)
        # make another retry maybe it isn't the last page
                if retries >= 3:
                    return self.subdomains
            prev_links = links
            self.should_sleep()
        return self.subdomains
class enumratorBaseThreaded(multiprocessing.Process, enumratorBase):
    def __init__(self, base_url, engine_name, domain, subdomains=None, q=None, silent=False, verbose=True):
        subdomains = subdomains or []
        enumratorBase.__init__(self, base_url, engine_name, domain, subdomains, silent=silent, verbose=verbose)
        multiprocessing.Process.__init__(self)
        self.q = q
        return
    def run(self):
        domain_list = self.enumerate()
        for domain in domain_list:
            self.q.append(domain)
class GoogleEnum(enumratorBaseThreaded):
    def __init__(self, domain, subdomains=None, q=None, silent=False, verbose=True):
        subdomains = subdomains or []
        base_url = "https://google.com/search?q={query}&btnG=Search&hl=en-US&biw=&bih=&gbv=1&start={page_no}&filter=0"
        self.engine_name = "Google"
        self.MAX_DOMAINS = 11
        self.MAX_PAGES = 200
        super(GoogleEnum, self).__init__(base_url, self.engine_name, domain, subdomains, q=q, silent=silent, verbose=verbose)
        self.q = q
        return
    def extract_domains(self, resp):
        links_list = list()
        link_regx = re.compile('<cite.*?>(.*?)</cite>')
        try:
            links_list = link_regx.findall(resp)
            for link in links_list:
                link = re.sub('<span.*>', '', link)
                if not link.startswith('http'):
                    link = "http://"   link
                subdomain = urlparse.urlparse(link).netloc
                if subdomain and subdomain not in self.subdomains and subdomain != self.domain:
                    if self.verbose:
                        self.print_("%s%s: %s%s" % (R, self.engine_name, W, subdomain))
                    self.subdomains.append(subdomain.strip())
        except Exception:
            pass
        return links_list
    def check_response_errors(self, resp):
        if (type(resp) is str or type(resp) is str) and 'Our systems have detected unusual traffic' in resp:
            self.print_(R   "[!] Error: Google probably now is blocking our requests"   W)
            self.print_(R   "[~] Finished now the Google Enumeration ..."   W)
            return False
        return True
    def should_sleep(self):
        time.sleep(5)
        return
    def generate_query(self):
        if self.subdomains:
            fmt = 'site:{domain} -www.{domain} -{found}'
            found = ' -'.join(self.subdomains[:self.MAX_DOMAINS - 2])
            query = fmt.format(domain=self.domain, found=found)
        else:
            query = "site:{domain} -www.{domain}".format(domain=self.domain)
        return query
class YahooEnum(enumratorBaseThreaded):
    def __init__(self, domain, subdomains=None, q=None, silent=False, verbose=True):
        subdomains = subdomains or []
        base_url = "https://search.yahoo.com/search?p={query}&b={page_no}"
        self.engine_name = "Yahoo"
        self.MAX_DOMAINS = 10
        self.MAX_PAGES = 0
        super(YahooEnum, self).__init__(base_url, self.engine_name, domain, subdomains, q=q, silent=silent, verbose=verbose)
        self.q = q
        return
    def extract_domains(self, resp):
        link_regx2 = re.compile('<span class=" fz-.*? fw-m fc-12th wr-bw.*?">(.*?)</span>')
        link_regx = re.compile('<span class="txt"><span class=" cite fw-xl fz-15px">(.*?)</span>')
        links_list = []
        try:
            links = link_regx.findall(resp)
            links2 = link_regx2.findall(resp)
            links_list = links + links2
            for link in links_list:
                link = re.sub("<(/)?b>", "", link)
                if not link.startswith('http'):
                    link = "http://"   link
                subdomain = urlparse.urlparse(link).netloc
                if not subdomain.endswith(self.domain):
                    continue
                if subdomain and subdomain not in self.subdomains and subdomain != self.domain:
                    if self.verbose:
                        self.print_("%s%s: %s%s" % (R, self.engine_name, W, subdomain))
                    self.subdomains.append(subdomain.strip())
        except Exception:
            pass
        return links_list
    def should_sleep(self):
        return
    def get_page(self, num):
        return num + 10
    def generate_query(self):
        if self.subdomains:
            fmt = 'site:{domain} -domain:www.{domain} -domain:{found}'
            found = ' -domain:'.join(self.subdomains[:77])
            query = fmt.format(domain=self.domain, found=found)
        else:
            query = "site:{domain}".format(domain=self.domain)
        return query
class AskEnum(enumratorBaseThreaded):
    def __init__(self, domain, subdomains=None, q=None, silent=False, verbose=True):
        subdomains = subdomains or []
        base_url = 'http://www.ask.com/web?q={query}&page={page_no}&qid=8D6EE6BF52E0C04527E51F64F22C4534&o=0&l=dir&qsrc=998&qo=pagination'
        self.engine_name = "Ask"
        self.MAX_DOMAINS = 11
        self.MAX_PAGES = 0
        enumratorBaseThreaded.__init__(self, base_url, self.engine_name, domain, subdomains, q=q, silent=silent, verbose=verbose)
        self.q = q
        return
    def extract_domains(self, resp):
        links_list = list()
        link_regx = re.compile('<p class="web-result-url">(.*?)</p>')
        try:
            links_list = link_regx.findall(resp)
            for link in links_list:
                if not link.startswith('http'):
                    link = "http://"   link
                subdomain = urlparse.urlparse(link).netloc
                if subdomain not in self.subdomains and subdomain != self.domain:
                    if self.verbose:
                        self.print_("%s%s: %s%s" % (R, self.engine_name, W, subdomain))
                    self.subdomains.append(subdomain.strip())
        except Exception:
            pass
        return links_list
    def get_page(self, num):
        return num + 1
    def generate_query(self):
        if self.subdomains:
            fmt = 'site:{domain} -www.{domain} -{found}'
            found = ' -'.join(self.subdomains[:self.MAX_DOMAINS])
            query = fmt.format(domain=self.domain, found=found)
        else:
            query = "site:{domain} -www.{domain}".format(domain=self.domain)
        return query
class BingEnum(enumratorBaseThreaded):
    def __init__(self, domain, subdomains=None, q=None, silent=False, verbose=True):
        subdomains = subdomains or []
        base_url = 'https://www.bing.com/search?q={query}&go=Submit&first={page_no}'
        self.engine_name = "Bing"
        self.MAX_DOMAINS = 30
        self.MAX_PAGES = 0
        enumratorBaseThreaded.__init__(self, base_url, self.engine_name, domain, subdomains, q=q, silent=silent)
        self.q = q
        self.verbose = verbose
        return
    def extract_domains(self, resp):
        links_list = list()
        link_regx = re.compile('<li class="b_algo"><h2><a href="https://juejin.im/post/7301574056720777251/(.*?)"')
        link_regx2 = re.compile('<div class="b_title"><h2><a href="https://juejin.im/post/7301574056720777251/(.*?)"')
        try:
            links = link_regx.findall(resp)
            links2 = link_regx2.findall(resp)
            links_list = links + links2
            for link in links_list:
                link = re.sub('<(/)?strong>|<span.*?>|<|>', '', link)
                if not link.startswith('http'):
                    link = "http://"   link
                subdomain = urlparse.urlparse(link).netloc
                if subdomain not in self.subdomains and subdomain != self.domain:
                    if self.verbose:
                        self.print_("%s%s: %s%s" % (R, self.engine_name, W, subdomain))
                    self.subdomains.append(subdomain.strip())
        except Exception:
            pass
        return links_list
    def generate_query(self):
        if self.subdomains:
            fmt = 'domain:{domain} -www.{domain} -{found}'
            found = ' -'.join(self.subdomains[:self.MAX_DOMAINS])
            query = fmt.format(domain=self.domain, found=found)
        else:
            query = "domain:{domain} -www.{domain}".format(domain=self.domain)
        return query
class BaiduEnum(enumratorBaseThreaded):
    def __init__(self, domain, subdomains=None, q=None, silent=False, verbose=True):
        subdomains = subdomains or []
        base_url = 'https://www.baidu.com/s?pn={page_no}&wd={query}&oq={query}'
        self.engine_name = "Baidu"
        self.MAX_DOMAINS = 2
        self.MAX_PAGES = 760
        enumratorBaseThreaded.__init__(self, base_url, self.engine_name, domain, subdomains, q=q, silent=silent, verbose=verbose)
        self.querydomain = self.domain
        self.q = q
        return
    def extract_domains(self, resp):
        links = list()
        found_newdomain = False
        subdomain_list = []
        link_regx = re.compile('<a.*?class="c-showurl".*?>(.*?)</a>')
        try:
            links = link_regx.findall(resp)
            for link in links:
                link = re.sub('<.*?>|>|<|&nbsp;', '', link)
                if not link.startswith('http'):
                    link = "http://"   link
                subdomain = urlparse.urlparse(link).netloc
                if subdomain.endswith(self.domain):
                    subdomain_list.append(subdomain)
                    if subdomain not in self.subdomains and subdomain != self.domain:
                        found_newdomain = True
                        if self.verbose:
                            self.print_("%s%s: %s%s" % (R, self.engine_name, W, subdomain))
                        self.subdomains.append(subdomain.strip())
        except Exception:
            pass
        if not found_newdomain and subdomain_list:
            self.querydomain = self.findsubs(subdomain_list)
        return links
    def findsubs(self, subdomains):
        count = Counter(subdomains)
        subdomain1 = max(count, key=count.get)
        count.pop(subdomain1, "None")
        subdomain2 = max(count, key=count.get) if count else ''
        return (subdomain1, subdomain2)
    def check_response_errors(self, resp):
        return True
    def should_sleep(self):
        time.sleep(random.randint(2, 5))
        return
    def generate_query(self):
        if self.subdomains and self.querydomain != self.domain:
            found = ' -site:'.join(self.querydomain)
            query = "site:{domain} -site:www.{domain} -site:{found} ".format(domain=self.domain, found=found)
        else:
            query = "site:{domain} -site:www.{domain}".format(domain=self.domain)
        return query
class NetcraftEnum(enumratorBaseThreaded):
    def __init__(self, domain, subdomains=None, q=None, silent=False, verbose=True):
        subdomains = subdomains or []
        self.base_url = 'https://searchdns.netcraft.com/?restriction=site ends with&host={domain}'
        self.engine_name = "Netcraft"
        super(NetcraftEnum, self).__init__(self.base_url, self.engine_name, domain, subdomains, q=q, silent=silent, verbose=verbose)
        self.q = q
        return
    def req(self, url, cookies=None):
        cookies = cookies or {}
        try:
            resp = self.session.get(url, headers=self.headers, timeout=self.timeout, cookies=cookies)
        except Exception as e:
            self.print_(e)
            resp = None
        return resp
    def should_sleep(self):
        time.sleep(random.randint(1, 2))
        return
    def get_next(self, resp):
        link_regx = re.compile('<a.*?href="(.*?)">Next Page')
        link = link_regx.findall(resp)
        url = 'http://searchdns.netcraft.com' + link[0]
        return url
    def create_cookies(self, cookie):
        cookies = dict()
        cookies_list = cookie[0:cookie.find(';')].split("=")
        cookies[cookies_list[0]] = cookies_list[1]
        # hashlib.sha1 requires utf-8 encoded str
        cookies['netcraft_js_verification_response'] = hashlib.sha1(urllib.unquote(cookies_list[1]).encode('utf-8')).hexdigest()
        return cookies
    def get_cookies(self, headers):
        if 'set-cookie' in headers:
            cookies = self.create_cookies(headers['set-cookie'])
        else:
            cookies = {}
        return cookies
    def enumerate(self):
        start_url = self.base_url.format(domain='example.com')
        resp = self.req(start_url)
        cookies = self.get_cookies(resp.headers)
        url = self.base_url.format(domain=self.domain)
        while True:
            resp = self.get_response(self.req(url, cookies))
            self.extract_domains(resp)
            if 'Next Page' not in resp:
                return self.subdomains
                break
            url = self.get_next(resp)
            self.should_sleep()
    def extract_domains(self, resp):
        links_list = list()
        link_regx = re.compile('<a class="results-table__host" href="https://juejin.im/post/7301574056720777251/(.*?)"')
        try:
            links_list = link_regx.findall(resp)
            for link in links_list:
                subdomain = urlparse.urlparse(link).netloc
                if not subdomain.endswith(self.domain):
                    continue
                if subdomain and subdomain not in self.subdomains and subdomain != self.domain:
                    if self.verbose:
                        self.print_("%s%s: %s%s" % (R, self.engine_name, W, subdomain))
                    self.subdomains.append(subdomain.strip())
        except Exception:
            pass
        return links_list
class DNSdumpster(enumratorBaseThreaded):
    def __init__(self, domain, subdomains=None, q=None, silent=False, verbose=True):
        subdomains = subdomains or []
        base_url = 'https://dnsdumpster.com/'
        self.live_subdomains = []
        self.engine_name = "DNSdumpster"
        self.q = q
        self.lock = None
        super(DNSdumpster, self).__init__(base_url, self.engine_name, domain, subdomains, q=q, silent=silent, verbose=verbose)
        return
    def check_host(self, host):
        is_valid = False
        Resolver = dns.resolver.Resolver()
        Resolver.nameservers = ['8.8.8.8', '8.8.4.4']
        self.lock.acquire()
        try:
            ip = Resolver.query(host, 'A')[0].to_text()
            if ip:
                if self.verbose:
                    self.print_("%s%s: %s%s" % (R, self.engine_name, W, host))
                is_valid = True
                self.live_subdomains.append(host)
        except:
            pass
        self.lock.release()
        return is_valid
    def req(self, req_method, url, params=None):
        params = params or {}
        headers = dict(self.headers)
        headers['Referer'] = 'https://dnsdumpster.com'
        try:
            if req_method == 'GET':
                resp = self.session.get(url, headers=headers, timeout=self.timeout)
            else:
                resp = self.session.post(url, data=params, headers=headers, timeout=self.timeout)
        except Exception as e:
            self.print_(e)
            resp = None
        return self.get_response(resp)
    def get_csrftoken(self, resp):
        csrf_regex = re.compile('<input type="hidden" name="csrfmiddlewaretoken" value="(.*?)">', re.S)
        token = csrf_regex.findall(resp)[0]
        return token.strip()
    def enumerate(self):
        self.lock = threading.BoundedSemaphore(value=70)
        resp = self.req('GET', self.base_url)
        token = self.get_csrftoken(resp)
        params = {'csrfmiddlewaretoken': token, 'targetip': self.domain}
        post_resp = self.req('POST', self.base_url, params)
        self.extract_domains(post_resp)
        for subdomain in self.subdomains:
            t = threading.Thread(target=self.check_host, args=(subdomain,))
            t.start()
            t.join()
        return self.live_subdomains
    def extract_domains(self, resp):
        tbl_regex = re.compile('<a name="hostanchor"></a>Host Records.*?<table.*?>(.*?)</table>', re.S)
        link_regex = re.compile('<td class="col-md-4">(.*?)<br>', re.S)
        links = []
        try:
            results_tbl = tbl_regex.findall(resp)[0]
        except IndexError:
            results_tbl = ''
        links_list = link_regex.findall(results_tbl)
        links = list(set(links_list))
        for link in links:
            subdomain = link.strip()
            if not subdomain.endswith(self.domain):
                continue
            if subdomain and subdomain not in self.subdomains and subdomain != self.domain:
                self.subdomains.append(subdomain.strip())
        return links
class Virustotal(enumratorBaseThreaded):
    def __init__(self, domain, subdomains=None, q=None, silent=False, verbose=True):
        subdomains = subdomains or []
        base_url = 'https://www.virustotal.com/ui/domains/{domain}/subdomains?relationships=resolutions'
        self.engine_name = "Virustotal"
        self.q = q
        super(Virustotal, self).__init__(base_url, self.engine_name, domain, subdomains, q=q, silent=silent, verbose=verbose)
        self.url = self.base_url.format(domain=self.domain)
        # Virustotal requires specific headers to bypass the bot detection:
        self.headers["X-Tool"] = "vt-ui-main"
        self.headers["X-VT-Anti-Abuse-Header"] = "hm"  # as of 1/20/2022, the content of this header doesn't matter, just its presence
        self.headers["Accept-Ianguage"] = self.headers["Accept-Language"]  # this header being present is required to prevent a captcha
        return
    # the main send_req need to be rewritten
    def send_req(self, url):
        try:
            resp = self.session.get(url, headers=self.headers, timeout=self.timeout)
        except Exception as e:
            self.print_(e)
            resp = None
        return self.get_response(resp)
    # once the send_req is rewritten we don't need to call this function, the stock one should be ok
    def enumerate(self):
        while self.url != '':
            resp = self.send_req(self.url)
            resp = json.loads(resp)
            if 'error' in resp:
                self.print_(R   "[!] Error: Virustotal probably now is blocking our requests"   W)
                break
            if 'links' in resp and 'next' in resp['links']:
                self.url = resp['links']['next']
            else:
                self.url = ''
            self.extract_domains(resp)
        return self.subdomains
    def extract_domains(self, resp):
        #resp is already parsed as json
        try:
            for i in resp['data']:
                if i['type'] == 'domain':
                    subdomain = i['id']
                    if not subdomain.endswith(self.domain):
                        continue
                    if subdomain not in self.subdomains and subdomain != self.domain:
                        if self.verbose:
                            self.print_("%s%s: %s%s" % (R, self.engine_name, W, subdomain))
                        self.subdomains.append(subdomain.strip())
        except Exception:
            pass
class ThreatCrowd(enumratorBaseThreaded):
    def __init__(self, domain, subdomains=None, q=None, silent=False, verbose=True):
        subdomains = subdomains or []
        base_url = 'https://www.threatcrowd.org/searchApi/v2/domain/report/?domain={domain}'
        self.engine_name = "ThreatCrowd"
        self.q = q
        super(ThreatCrowd, self).__init__(base_url, self.engine_name, domain, subdomains, q=q, silent=silent, verbose=verbose)
        return
    def req(self, url):
        try:
            resp = self.session.get(url, headers=self.headers, timeout=self.timeout)
        except Exception:
            resp = None
        return self.get_response(resp)
    def enumerate(self):
        url = self.base_url.format(domain=self.domain)
        resp = self.req(url)
        self.extract_domains(resp)
        return self.subdomains
    def extract_domains(self, resp):
        try:
            links = json.loads(resp)['subdomains']
            for link in links:
                subdomain = link.strip()
                if not subdomain.endswith(self.domain):
                    continue
                if subdomain not in self.subdomains and subdomain != self.domain:
                    if self.verbose:
                        self.print_("%s%s: %s%s" % (R, self.engine_name, W, subdomain))
                    self.subdomains.append(subdomain.strip())
        except Exception as e:
            pass
class CrtSearch(enumratorBaseThreaded):
    def __init__(self, domain, subdomains=None, q=None, silent=False, verbose=True):
        subdomains = subdomains or []
        base_url = 'https://crt.sh/?q=%.{domain}'
        self.engine_name = "SSL Certificates"
        self.q = q
        super(CrtSearch, self).__init__(base_url, self.engine_name, domain, subdomains, q=q, silent=silent, verbose=verbose)
        return
    def req(self, url):
        try:
            resp = self.session.get(url, headers=self.headers, timeout=self.timeout)
        except Exception:
            resp = None
        return self.get_response(resp)
    def enumerate(self):
        url = self.base_url.format(domain=self.domain)
        resp = self.req(url)
        if resp:
            self.extract_domains(resp)
        return self.subdomains
    def extract_domains(self, resp):
        link_regx = re.compile('<TD>(.*?)</TD>')
        try:
            links = link_regx.findall(resp)
            for link in links:
                link = link.strip()
                subdomains = []
                if '<BR>' in link:
                    subdomains = link.split('<BR>')
                else:
                    subdomains.append(link)
                for subdomain in subdomains:
                    if not subdomain.endswith(self.domain) or '*' in subdomain:
                        continue
                    if '@' in subdomain:
                        subdomain = subdomain[subdomain.find('@')+1:]
                    if subdomain not in self.subdomains and subdomain != self.domain:
                        if self.verbose:
                            self.print_("%s%s: %s%s" % (R, self.engine_name, W, subdomain))
                        self.subdomains.append(subdomain.strip())
        except Exception as e:
            print(e)
            pass
class PassiveDNS(enumratorBaseThreaded):
    def __init__(self, domain, subdomains=None, q=None, silent=False, verbose=True):
        subdomains = subdomains or []
        base_url = 'https://api.sublist3r.com/search.php?domain={domain}'
        self.engine_name = "PassiveDNS"
        self.q = q
        super(PassiveDNS, self).__init__(base_url, self.engine_name, domain, subdomains, q=q, silent=silent, verbose=verbose)
        return
    def req(self, url):
        try:
            resp = self.session.get(url, headers=self.headers, timeout=self.timeout)
        except Exception as e:
            resp = None
        return self.get_response(resp)
    def enumerate(self):
        url = self.base_url.format(domain=self.domain)
        resp = self.req(url)
        if not resp:
            return self.subdomains
        self.extract_domains(resp)
        return self.subdomains
    def extract_domains(self, resp):
        try:
            subdomains = json.loads(resp)
            for subdomain in subdomains:
                if subdomain not in self.subdomains and subdomain != self.domain:
                    if self.verbose:
                        self.print_("%s%s: %s%s" % (R, self.engine_name, W, subdomain))
                    self.subdomains.append(subdomain.strip())
        except Exception as e:
            pass
class portscan():
    def __init__(self, subdomains, ports):
        self.subdomains = subdomains
        self.ports = ports
        self.lock = None
    def port_scan(self, host, ports):
        openports = []
        self.lock.acquire()
        for port in ports:
            try:
                s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                s.settimeout(2)
                result = s.connect_ex((host, int(port)))
                if result == 0:
                    openports.append(port)
                s.close()
            except Exception:
                pass
        self.lock.release()
        if len(openports) > 0:
            print("%s%s%s - %sFound open ports:%s %s%s%s" % (G, host, W, R, W, Y, ', '.join(openports), W))
    def run(self):
        self.lock = threading.BoundedSemaphore(value=20)
        for subdomain in self.subdomains:
            t = threading.Thread(target=self.port_scan, args=(subdomain, self.ports))
            t.start()
def main(domain, threads, savefile, ports, silent, verbose, enable_bruteforce, engines):
    bruteforce_list = set()
    search_list = set()
    if is_windows:
        subdomains_queue = list()
    else:
        subdomains_queue = multiprocessing.Manager().list()
    # Check Bruteforce Status
    if enable_bruteforce or enable_bruteforce is None:
        enable_bruteforce = True
    # Validate domain
    domain_check = re.compile("^(http|https)?[a-zA-Z0-9]+([\-\.]{1}[a-zA-Z0-9]+)*\.[a-zA-Z]{2,}$")
    if not domain_check.match(domain):
        if not silent:
            print(R   "Error: Please enter a valid domain"   W)
        return []
    if not domain.startswith('http://') or not domain.startswith('https://'):
        domain = 'http://' + domain
    parsed_domain = urlparse.urlparse(domain)
    if not silent:
        print(B   "[-] Enumerating subdomains now for %s" % parsed_domain.netloc   W)
    if verbose and not silent:
        print(Y   "[-] verbosity is enabled, will show the subdomains results in realtime"   W)
    supported_engines = {'baidu': BaiduEnum,
                         'yahoo': YahooEnum,
                         'google': GoogleEnum,
                         'bing': BingEnum,
                         'ask': AskEnum,
                         'netcraft': NetcraftEnum,
                         'dnsdumpster': DNSdumpster,
                         'virustotal': Virustotal,
                         'threatcrowd': ThreatCrowd,
                         'ssl': CrtSearch,
                         'passivedns': PassiveDNS
                         }
    chosenEnums = []
    if engines is None:
        chosenEnums = [
            BaiduEnum, YahooEnum, GoogleEnum, BingEnum, AskEnum,
            NetcraftEnum, DNSdumpster, Virustotal, ThreatCrowd,
            CrtSearch, PassiveDNS
        ]
    else:
        engines = engines.split(',')
        for engine in engines:
            if engine.lower() in supported_engines:
                chosenEnums.append(supported_engines[engine.lower()])
    # Start the engines enumeration
    enums = [enum(domain, [], q=subdomains_queue, silent=silent, verbose=verbose) for enum in chosenEnums]
    for enum in enums:
        enum.start()
    for enum in enums:
        enum.join()
    subdomains = set(subdomains_queue)
    for subdomain in subdomains:
        search_list.add(subdomain)
    if enable_bruteforce:
        if not silent:
            print(G   "[-] Starting bruteforce module now using subbrute.."   W)
        record_type = False
        path_to_file = os.path.dirname(os.path.realpath(__file__))
        subs = os.path.join(path_to_file, 'subbrute', 'names.txt')
        resolvers = os.path.join(path_to_file, 'subbrute', 'resolvers.txt')
        process_count = threads
        output = False
        json_output = False
        bruteforce_list = subbrute.print_target(parsed_domain.netloc, record_type, subs, resolvers, process_count, output, json_output, search_list, verbose)
    subdomains = search_list.union(bruteforce_list)
    if subdomains:
        subdomains = sorted(subdomains, key=subdomain_sorting_key)
        if savefile:
            write_file(savefile, subdomains)
        if not silent:
            print(Y   "[-] Total Unique Subdomains Found: %s" % len(subdomains)   W)
        if ports:
            if not silent:
                print(G   "[-] Start port scan now for the following ports: %s%s" % (Y, ports)   W)
            ports = ports.split(',')
            pscan = portscan(subdomains, ports)
            pscan.run()
        elif not silent:
            for subdomain in subdomains:
                print(G + subdomain + W)
    return subdomains
def interactive():
    args = parse_args()
    domain = args.domain
    threads = args.threads
    savefile = args.output
    ports = args.ports
    enable_bruteforce = args.bruteforce
    verbose = args.verbose
    engines = args.engines
    if verbose or verbose is None:
        verbose = True
    if args.no_color:
        no_color()
    banner()
    res = main(domain, threads, savefile, ports, silent=False, verbose=verbose, enable_bruteforce=enable_bruteforce, engines=engines)
if __name__ == "__main__":
    interactive()

github.com/aboul3la/Su…

pip install -r requirements.txt

git clone https://github.com/aboul3la/Sublist3r.git  # download
cd Sublist3r 
pip install -r requirements.txt   # install the dependencies to finish setup

Note: the tools above brute-force subdomains; with a strong wordlist they are quite effective.

Help

D:\Desktop\tools\Sublist3r-master>python sublist3r.py -h
usage: sublist3r.py [-h] -d DOMAIN [-b [BRUTEFORCE]] [-p PORTS] [-v [VERBOSE]] [-t THREADS] [-e ENGINES] [-o OUTPUT]
                    [-n]
OPTIONS:
  -h, --help            show this help message and exit
  -d DOMAIN, --domain DOMAIN
                        Domain name to enumerate it's subdomains
  -b [BRUTEFORCE], --bruteforce [BRUTEFORCE]
                        Enable the subbrute bruteforce module
  -p PORTS, --ports PORTS
                        Scan the found subdomains against specified tcp ports
  -v [VERBOSE], --verbose [VERBOSE]
                        Enable Verbosity and display results in realtime
  -t THREADS, --threads THREADS
                        Number of threads to use for subbrute bruteforce
  -e ENGINES, --engines ENGINES
                        Specify a comma-separated list of search engines
  -o OUTPUT, --output OUTPUT
                        Save the results to text file
  -n, --no-color        Output without color
Example: python sublist3r.py -d google.com

Option summary

-h : help
-d : root domain whose subdomains should be enumerated
-b : brute-force subdomains with subbrute
-p : scan the discovered subdomains against the given TCP ports
-v : show detailed results in real time
-t : number of threads
-e : search engines to use
-o : save the results to a text file
-n : output without color

Scan subdomains with the default options

python sublist3r.py -d baidu.com

Brute-force subdomain enumeration

python sublist3r.py -b -d baidu.com

Python 2.7.14 environment (PATH entries)

;C:\Python27;C:\Python27\Scripts

OneForAll

pip3 install --user -r requirements.txt -i mirrors.aliyun.com/pypi/simple…

python3 oneforall.py --target baidu.com run  # collect


Brute-forcing subdomains

Example:

brute.py --target domain.com --word True run

brute.py --targets ./domains.txt --word True run

brute.py --target domain.com --word True --concurrent 2000 run

brute.py --target domain.com --word True --wordlist subnames.txt run

brute.py --target domain.com --word True --recursive True --depth 2 run

brute.py --target d.com --fuzz True --place m.*.d.com --rule '[a-z]' run

brute.py --target d.com --fuzz True --place m.*.d.com --fuzzlist subnames.txt run


Wydomain

dnsburte.py -d aliyun.com -f dnspod.csv -o aliyun.log

wydomain.py -d aliyun.com

FuzzDomain


Hidden domains: hosts collision

Hidden asset discovery - hosts collision

mp.weixin.qq.com/s/fuASZODw1…

Requests to a target asset's IP often return 401, 403, 404, or 500, while requesting the same asset by domain name returns the normal business system (direct IP access is forbidden). Most of these hosts only respond properly when the right Host is bound (standard practice at Internet companies today; the domain's A record was removed but the reverse-proxy configuration was never updated). So we can take the collected target domains and the target IP ranges and pair them up, colliding IP ranges against domains, which often uncovers interesting things.

When sending the HTTP requests, pair the domain list with the IP list and iterate through the combinations (equivalent to editing your local hosts file), then compare the returned titles and response sizes; this quickly surfaces hidden assets.

Hosts collision needs the target's domains and the target's related IPs as its dictionaries (a minimal sketch follows).
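A minimal hosts-collision sketch along these lines, assuming the requests package; domains.txt and ips.txt are placeholder wordlists you would fill from the sources listed below:

# Hosts collision sketch: pair every candidate IP with every collected domain,
# force the Host header, and compare status / body size / title.
# Assumes: pip install requests; domains.txt and ips.txt are your own lists.
import re
import requests
import urllib3

urllib3.disable_warnings()  # raw IPs usually serve self-signed or mismatched certificates

def check(ip, domain, scheme="http"):
    headers = {"Host": domain, "User-Agent": "Mozilla/5.0"}
    try:
        resp = requests.get(f"{scheme}://{ip}", headers=headers, timeout=5,
                            verify=False, allow_redirects=False)
    except requests.RequestException:
        return
    title = re.search(r"<title>(.*?)</title>", resp.text, re.I | re.S)
    print(f"{ip:15} {domain:30} {resp.status_code} len={len(resp.content)} "
          f"title={title.group(1).strip() if title else '-'}")

if __name__ == "__main__":
    domains = [d.strip() for d in open("domains.txt") if d.strip()]
    ips = [i.strip() for i in open("ips.txt") if i.strip()]
    for ip in ips:
        for domain in domains:
            check(ip, domain)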

Domains have already been covered above.

Sources of candidate IPs include:

Historical resolution IPs of the target domain

site.ip138.com/

ipchaxun.com/


IP regex

www.aicesu.cn/reg/


24 matches found in total:
18.162.220.166
47.243.238.59
18.167.169.187
47.243.238.59
18.167.169.187
18.162.220.166
143.152.14.32
18.167.185.63
218.162.122.243
54.251.129.116
18.158.204.42
18.193.247.244
15.207.160.111
52.51.89.237
54.79.36.20
18.158.248.164
18.193.198.127
16.162.59.31
18.162.182.4
18.166.5.64
128.14.246.28
213.214.66.190
143.223.115.185
023.20.239.12

Using Hosts_scan


Multi-threaded


Port scanning

Once the target's rough IP ranges are known, start by probing which ports are open; particular services may be running on their default ports. Probing open ports helps collect the target's assets quickly and find the target's other functional sites.

masscan port scanning

sudo masscan -p 1-65535 192.168.19.151 --rate=1000

gitee.com/youshusoft/…

Scanner.exe 192.168.19.151 1-65535 512


Yujian (御剑) port scanner


Scanning ports and probing service information with nmap

Commonly used options, for example:

nmap -sV 192.168.0.2
nmap -sT 192.168.0.2
nmap -Pn -A -sC 192.168.0.2
nmap -sU -sT -p0-65535 192.168.122.1

These scan the target host's open ports and service version numbers.

To scan several IPs or IP ranges, save them to a text file and run

nmap -iL ip.txt

to scan every IP in the list.

Nmap is the most common choice for port probing: easy to operate, with very readable output.
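When nmap is not at hand, a rough first pass can be done with plain TCP connect probes. A minimal standard-library sketch (the target address and port list are placeholders):

# Quick TCP connect probe sketch (standard library only)
import socket
from concurrent.futures import ThreadPoolExecutor

COMMON_PORTS = [21, 22, 23, 80, 443, 1433, 3306, 3389, 6379, 8080]

def probe(host, port, timeout=2):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        if s.connect_ex((host, port)) == 0:  # 0 means the TCP handshake succeeded
            print(f"{host}:{port} open")

if __name__ == "__main__":
    target = "192.168.0.2"  # placeholder target
    with ThreadPoolExecutor(max_workers=50) as pool:
        for p in COMMON_PORTS:
            pool.submit(probe, target, p)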

Online port check

coolaf.com/tool/port


Port scanners

Yujian (御剑), masscan, zmap, etc.

Yujian high-speed port scanner:


Enter the IP range to scan (for a single IP, put the same address in both the start and end fields); you can keep the default port list or specify your own ports.

Ports worth probing

21,22,23,1433,152,3306,3389,5432,5900,50070,50030,50000,27017,27018,11211,9200,9300,7001,7002,6379,5984,873,443,8000-9090,80-89,80,10000,8888,8649,8083,8080,8089,9090,7778,7001,7002,6082,5984,4440,3312,3311,3128,2601,2604,2222,2082,2083,389,88,512,513,514,1025,111,1521,445,135,139,53

Common ports and corresponding services in penetration testing

1. Web (web vulnerabilities / sensitive directories)

Common third-party component vulnerabilities: Struts, ThinkPHP, JBoss, Ganglia, Zabbix

80 web

80-89 web

8000-9090 web

2. Databases (scan for weak passwords)

1433 MSSQL

1521 Oracle

3306 MySQL

5432 PostgreSQL

3. Special services (unauthorized access / command execution / vulnerabilities)

443 SSL Heartbleed

873 Rsync unauthorized access

5984 CouchDB http://xxx:5984/\_utils/

6379 Redis unauthorized access

7001,7002 WebLogic default weak passwords, deserialization

9200,9300 Elasticsearch; see WooYun: command execution on an Elasticsearch server of Duowan

11211 Memcache unauthorized access

27017,27018 MongoDB unauthorized access

50000 SAP command execution

50070,50030 Hadoop default ports, unauthorized access

4. Common services (weak-password scanning / port brute force)

21 FTP

22 SSH

23 Telnet

2601,2604 zebra routing, default password "zebra"

3389 Remote Desktop

5. Port summary

21 FTP

22 SSH

23 Telnet

80 web

80-89 web

161 SNMP

389 LDAP

443 SSL Heartbleed and various web vulnerability tests

445 SMB

512,513,514 Rexec

873 Rsync unauthorized access

1025,111 NFS

1433 MSSQL

1521 Oracle (iSqlPlus ports: 5560, 7778)

2082/2083 cPanel hosting control panel login (mostly used abroad)

2222 DirectAdmin hosting control panel login (mostly used abroad)

2601,2604 zebra routing, default password "zebra"

3128 Squid proxy default port; with no password set it can often be used to roam straight into the intranet

3306 MySQL

3312/3311 Kangle hosting control panel login

3389 Remote Desktop

4440 Rundeck; see WooYun: roaming Sina's intranet through one of its services

5432 PostgreSQL

5900 VNC

5984 CouchDB http://xxx:5984/\_utils/

6082 Varnish; see WooYun: unauthorized access to the Varnish HTTP accelerator CLI can lead to direct site defacement or use as a proxy into the intranet

6379 Redis unauthorized access

7001,7002 WebLogic default weak passwords, deserialization

7778 Kloxo hosting control panel login

8000-9090 common web ports; some ops teams like putting admin panels on these non-80 ports

8080 Tomcat/WDCP hosting panel, default weak passwords

8080,8089,9090 JBoss

8083 Vesta CP hosting panel (mostly used abroad)

8649 Ganglia

8888 AMH/LuManager hosting panel default port

9200,9300 Elasticsearch; see WooYun: command execution on an Elasticsearch server of Duowan

10000 Virtualmin/Webmin virtual hosting control panel

11211 Memcache unauthorized access

27017,27018 MongoDB unauthorized access

28017 MongoDB statistics page

50000 SAP command execution

50070,50030 Hadoop default ports, unauthorized access

Common ports and attack methods


Nmap, masscan scans, etc.

For example: nmap -p 80,443,8000,8080 -Pn 192.168.0.0/24

Common port list

21,22,23,80-90,161,389,443,445,873,1099,1433,1521,1900,2082,2083,2222,2601,2604,3128,3306,3311,3312,3389,4440,4848,5432,5560,5900,5901,5902,6082,6379,7001-7010,7778,8080-8090,8649,8888,9000,9200,10000,11211,27017,28017,50000,50030,50060,135,139,445,53,88

Finding the real IP

[[Bypassing the CDN to find the real IP]]

If the target site uses a CDN, the real IP is hidden behind it; to reach the real server you must recover the real IP and continue enumerating neighboring sites from that IP.

Note: even when the main site uses a CDN, its subdomains often do not. If the main site and a subdomain sit in the same IP range, finding the subdomain's real IP is another way in.

Ping from multiple locations to confirm whether a CDN is in use

ping.chinaz.com/

ping.aizhan.com/
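Besides the multi-location ping sites, you can compare the A records returned by several public resolvers; many distinct IPs is a hint that a CDN sits in front. A minimal sketch, assuming the dnspython package:

# Compare A records across public resolvers (assumes: pip install dnspython)
# Many different answers across resolvers suggests the domain is behind a CDN.
import dns.resolver

RESOLVERS = {"Google": "8.8.8.8", "Cloudflare": "1.1.1.1", "114DNS": "114.114.114.114"}

def resolve_everywhere(domain):
    seen = set()
    for name, server in RESOLVERS.items():
        r = dns.resolver.Resolver()
        r.nameservers = [server]
        try:
            ips = sorted(a.to_text() for a in r.resolve(domain, "A"))
        except Exception:
            ips = []
        seen.update(ips)
        print(f"{name:10} {server:15} -> {ips}")
    print(f"{len(seen)} distinct IPs in total")

if __name__ == "__main__":
    resolve_everywhere("www.example.com")  # placeholder domain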


Query historical DNS resolution records

Among the historical records, the earliest resolution IP is quite likely the real one. This is the recommended quick way to find the real IP, although it does not work for every site.

DNSDB

dnsdb.io/zh-cn/

ThreatBook (微步在线)

x.threatbook.cn/


Ipip.net

tools.ipip.net/cdn.php


viewdns

viewdns.info/


phpinfo

If the target leaks a phpinfo page or similar, the real IP can be read from SERVER_ADDR or _SERVER["SERVER_ADDR"] in the phpinfo output.

Bypassing the CDN

For the many ways of bypassing a CDN, see
www.cnblogs.com/qiudabai/p/…


Neighboring sites and the C-segment

Neighboring sites often host business systems. Collect the neighboring sites of the IPs you already have first, then probe the C-segment; once C-segment targets are confirmed, collect their neighboring sites once more.

A neighboring site sits on the same server as the known target site but on a different port or virtual host. After finding them with the methods below, visit each one to confirm whether it is the kind of site you need.

Chinaz

Same-IP website lookup: stool.chinaz.com/same

chapangzhan.com/

Google hacking

blog.csdn.net/qq\_361191…

Cyberspace search engines

For example, use FOFA to find neighboring sites and the C-segment.

This approach is efficient and shows the site titles directly, though uncommon ports are occasionally missing from the index; that is rare, but when topping up assets later you can re-scan with nmap using the method below.


Online C-segment lookup: webscan.cc

webscan.cc

c.webscan.cc/


C-segment script

pip install requests

#coding:utf-8
# Query webscan.cc for every host in the target's /24 and record domain + title.
import requests
import json

def get_c(ip):
    print("Collecting {}".format(ip))
    url = "http://api.webscan.cc/?action=query&ip={}".format(ip)
    req = requests.get(url=url)
    html = req.text
    data = req.json()
    if 'null' not in html:
        with open("result.txt", 'a', encoding='utf-8') as f:
            f.write(ip + '\n')
        for i in data:
            with open("result.txt", 'a', encoding='utf-8') as f:
                f.write("\t{} {}\n".format(i['domain'], i['title']))
                print("    [+] {} {}".format(i['domain'], i['title']))

def get_ips(ip):
    # expand x.y.z.* into the 255 addresses of the /24
    iplist = []
    ips_str = ip[:ip.rfind('.')]
    for ips in range(1, 256):
        ipadd = ips_str + '.' + str(ips)
        iplist.append(ipadd)
    return iplist

ip = input("Enter the IP to query: ")
ips = get_ips(ip)
for p in ips:
    get_c(p)

Note: when probing a C-segment, always confirm whether each IP really belongs to the target; not every IP in a /24 does.

Cyberspace search engines

If you want to collect assets quickly in a short time, cyberspace search engines are a good choice: you can see neighboring sites, ports, site titles, IPs and more at a glance, and click through the listed sites to judge whether they are what you need. Common FOFA syntax (a small API example follows the list):

1. Same-IP neighboring sites: ip="192.168.0.1"

2. C-segment: ip="192.168.0.0/24"

3. Subdomains: domain="baidu.com"

4. Title/keyword: title="百度"

5. To narrow results to one city, combine clauses:

title="百度" && region="Beijing"

6. Characteristic strings: body="百度" or header="baidu"
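FOFA also exposes an HTTP API that takes the same query syntax, base64-encoded. A sketch assuming a FOFA account with an API email/key and the v1 search endpoint; check the current FOFA API docs for the exact parameter names:

# FOFA API sketch (assumes: pip install requests; FOFA_EMAIL/FOFA_KEY are your own credentials)
import base64
import requests

FOFA_EMAIL = "you@example.com"  # placeholder credentials
FOFA_KEY = "your_api_key"

def fofa_search(query, size=100):
    params = {
        "email": FOFA_EMAIL,
        "key": FOFA_KEY,
        "qbase64": base64.b64encode(query.encode()).decode(),  # FOFA expects the query base64-encoded
        "size": size,
        "fields": "host,ip,port,title",
    }
    resp = requests.get("https://fofa.info/api/v1/search/all", params=params, timeout=15)
    for host, ip, port, title in resp.json().get("results", []):
        print(host, ip, port, title)

if __name__ == "__main__":
    fofa_search('domain="baidu.com"')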

Scanning sensitive directories/files

Scanning for sensitive directories takes a strong wordlist built up over time; with a good dictionary you can find a site's admin panel and sensitive files far more efficiently. Common finds include exposed .git directories, exposed .svn directories, phpinfo leaks and the like. This step can largely be left to the various scanners: feed in the target site, pick the matching wordlist type, and start scanning - very convenient.

Yujian (御剑)

www.fujieace.com/hacker/tool…


7kbstorm

github.com/7kbstorm/7k…


bbscan

github.com/lijiejie/BB…

With pip already installed, simply run:

pip install -r requirements.txt

Usage examples:

1. Scan a single web service, www.target.com

python BBScan.py --host www.target.com

2. Scan www.target.com and the other hosts under www.target.com/28

python BBScan.py --host www.target.com --network 28

3. Scan all hosts listed in a txt file

python BBScan.py -f wandoujia.com.txt

4. Import all hosts from a folder and scan them

python BBScan.py -d targets/

The --network parameter sets the subnet mask: use 28~30 for small companies, 26~28 for mid-sized companies, and 24~26 for large ones.

Try to avoid 24, though, since the scan takes far too long, unless you want to rack up a few more findings across the various SRC programs.

This plugin was extracted from an internal scanner; thanks to Jekkay Hu <34538980[at]qq.com>.

If you have genuinely useful rules, please validate them against a few sites before opening a pull request.

The script will keep being improved; next up:

Add useful rules, and categorize and refine them better

Allow HTTP requests to be imported directly from the rules/request folder

Optimize the scanning logic

dirmap

pip install -r requirement.txt

github.com/H4ckForJob/…

Single target

python3 dirmap.py -i https://target.com -lcf

Multiple targets

python3 dirmap.py -iF urls.txt -lcf

dirsearch

gitee.com/Abaomiangua…

unzip dirsearch.zip

python3 dirsearch.py -u http://m.scabjd.com/ -e *


gobuster

sudo apt-get install gobuster

gobuster dir -u www.servyou.com.cn/ -w usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt -x php -t 50

dir -u <url> -w <wordlist> -x <extensions> -t <threads>

dir -u www.servyou.com.cn/ -w /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt -x "php,html,rar,zip" -d --wildcard -o servyou.log | grep ^"3402"

Website files

[[Common sensitive file disclosures]]

1. robots.txt

2. crossdomain.xml

3. sitemap.xml

4. admin directories

5. website installer packages

6. website upload directories

7. MySQL admin pages

8. phpinfo

9. website text editors

10. test files

11. website backup files (.rar, .zip, .7z, .tar.gz, .bak)

12. DS_Store files

13. vim editor backup files (.swp)

14. WEB-INF/web.xml files

15. .git

16. .svn

www.secpulse.com/archives/55…

Scanning for web page backups

For example:

config.php

config.php~

config.php.bak

config.php.swp

config.php.rar

config.php.tar.gz
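A small sketch that probes for the backup variants listed above, assuming the requests package; the base URL and file name are placeholders:

# Probe common backup variants of a known script name (assumes: pip install requests)
import requests
import urllib3

urllib3.disable_warnings()
SUFFIXES = ["~", ".bak", ".swp", ".rar", ".zip", ".tar.gz", ".old"]

def probe_backups(base_url, filename="config.php"):
    for suffix in SUFFIXES:
        url = f"{base_url.rstrip('/')}/{filename}{suffix}"
        try:
            resp = requests.get(url, timeout=5, verify=False, allow_redirects=False)
        except requests.RequestException:
            continue
        # a 200 with a non-trivial body is worth a manual look; some WAFs fake 200s, so verify by hand
        if resp.status_code == 200 and len(resp.content) > 0:
            print(f"[+] {url} {resp.status_code} len={len(resp.content)}")

if __name__ == "__main__":
    probe_backups("http://www.target.com")  # placeholder target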

Collecting website header information

1. Middleware: web servers such as Apache, IIS 7, IIS 7.5, IIS 8, Nginx, WebLogic, Tomcat

2. Site components: JS libraries such as jQuery and Vue, page layout frameworks such as Bootstrap - all visible straight from the browser


whatweb.bugscaner.com/look/


Wappalyzer Firefox extension


Query header information with curl

curl www.moonsec.com -i


Searching for sensitive files

GitHub search

in:name test        # repositories whose name contains the keyword "test"

in:description test # repositories whose description contains the keyword

in:readme test      # repositories whose README contains the keyword

Searching for passwords of certain systems

github.com/search?q=sm…

GitHub keyword monitoring

www.codercto.com/a/46640.htm…

Google search

site:Github.com sa password

site:Github.com root password

site:Github.com User ID='sa';Password

site:Github.com inurl:sql

SVN information gathering

site:Github.com svn

site:Github.com svn username

site:Github.com svn password

site:Github.com svn username password

General information gathering

site:Github.com password

site:Github.com ftp ftppassword

site:Github.com 密码 (password)

site:Github.com 内部 (internal)

blog.csdn.net/qq\_361191…

www.361way.com/github-hack…

docs.github.com/cn/github/s…

github.com/search?q=sm…

Google hacking

site: restrict results to a domain

inurl: keyword that must appear in the URL

intext: keyword in the page body

filetype: restrict results to a file type

mp.weixin.qq.com/s/2UJ-wjq44…

zhuanlan.zhihu.com/p/491708780

WooYun vulnerability archive

wooyun.website/

Netdisk (cloud drive) search

Lingfengyun search: www.lingfengyun.com/

Panduoduo: www.panduoduo.net/

Pansoso: www.pansoso.com/

Pansou: www.pansou.com/

Breach databases (社工库)

Search names / common IDs / emails / passwords / phone numbers, then try logging into netdisks, websites and mailboxes to find sensitive information.

Telegram bots


Site registration information

www.reg007.com - look up which sites an account is registered on


Usually used together with breach databases.

Sensitive information in JS

  1. Site URLs written into the JS

  2. JS API endpoints containing user information, e.g. accounts and passwords

jsfinder

gitee.com/kn1fes/JSFi…

python3 JSFinder.py -u https://www.mi.com
python3 JSFinder.py -u https://www.mi.com -d
python3 JSFinder.py -u https://www.mi.com -d -ou mi_url.txt -os mi_subdomain.txt


To pull in more information, use -d for deep crawling, and use -ou and -os to name the files the URLs and subdomains are saved to.

You can also batch-process URLs or JS links to extract the URLs inside them.

URL list:

python JSFinder.py -f text.txt

JS list:

python JSFinder.py -f text.txt -j

Packer-Fuzzer

Finds a site's interaction endpoints and authorization keys.

With the popularity of front-end bundlers, you will run into more and more sites built with packers such as Webpack in day-to-day penetration tests and security assessments. These bundlers pack the whole site's APIs and API parameters together for the web app to call, which makes it easy to enumerate the site's features and API inventory quickly; but the generated JS files are unusually numerous and the total amount of JS is unusually large (tens of thousands of lines), which makes manual testing very inconvenient. Packer Fuzzer was written to fill that gap.

The tool automatically fuzzes out the target site's APIs and their parameters, and can quickly test for seven vulnerability classes: unauthorized access, sensitive information disclosure, CORS, SQL injection, horizontal privilege escalation, weak passwords, and arbitrary file upload. After the scan it can also generate a report, either an analysis-friendly HTML version or the more formal doc, pdf and txt versions.

sudo apt-get install nodejs && sudo apt-get install npm
git clone https://gitee.com/keyboxdzd/Packer-Fuzzer.git
pip3 install -r requirements.txt
python3 PackerFuzzer.py -u https://www.liaoxuefeng.com


SecretFinder

A Python-based tool that searches JavaScript for sensitive information.

gitee.com/mucn/Secret…

python SecretFinder.py -i https://www.moonsec.com/ -e
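The idea behind such tools is simple regex matching over downloaded JS. A minimal sketch, assuming the requests package; the patterns below are illustrative examples only, not SecretFinder's actual rule set:

# Grep a JS file for likely secrets and endpoints (assumes: pip install requests)
# The regexes are illustrative only; real tools ship far larger rule sets.
import re
import requests

PATTERNS = {
    "api_path": r"""["'](/api/[^"']+)["']""",
    "aws_key":  r"AKIA[0-9A-Z]{16}",
    "jwt":      r"eyJ[A-Za-z0-9_\-]+\.[A-Za-z0-9_\-]+\.[A-Za-z0-9_\-]+",
    "password": r"""(?i)password\s*[:=]\s*["'][^"']{4,}["']""",
}

def scan_js(url):
    body = requests.get(url, timeout=10).text
    for name, pattern in PATTERNS.items():
        for match in set(re.findall(pattern, body)):
            print(f"[{name}] {match}")

if __name__ == "__main__":
    scan_js("https://www.example.com/static/app.js")  # placeholder JS URL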

CMS identification

After collecting the site information, fingerprint the site to determine the target's CMS and version. That makes it easier to plan the next steps of the test, using public PoCs or your own accumulated techniques for the actual penetration test.
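Most fingerprinting tools boil down to matching characteristic paths, keywords, or response hashes. A toy sketch, assuming the requests package; the rule table is a tiny illustrative sample, not a real fingerprint base:

# Toy CMS fingerprint check based on characteristic paths and keywords
# (assumes: pip install requests; the rules are a small illustrative sample)
import requests
import urllib3

urllib3.disable_warnings()
FINGERPRINTS = [
    # (path, keyword expected in the response body, CMS name)
    ("/wp-login.php",            "wordpress", "WordPress"),
    ("/administrator/index.php", "joomla",    "Joomla"),
    ("/robots.txt",              "dede",      "DedeCMS"),
]

def identify(base_url):
    for path, keyword, cms in FINGERPRINTS:
        try:
            resp = requests.get(base_url.rstrip("/") + path, timeout=5, verify=False)
        except requests.RequestException:
            continue
        if resp.status_code == 200 and keyword.lower() in resp.text.lower():
            print(f"[+] {base_url} looks like {cms}")
            return cms
    print(f"[-] {base_url}: no match in this sample rule set")

if __name__ == "__main__":
    identify("http://www.target.com")  # placeholder target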

Yunsee (云悉)

www.yunsee.cn/info.html

TideFinger (潮汐指纹)

finger.tidesec.net/

CMS fingerprinting

whatweb.bugscaner.com/look/

github.com/search?q=cm…

whatcms


Yujian (御剑) CMS identification

github.com/ldbfpiaoran…

github.com/theLSA/cmsI…

Less conventional tricks

1. If you have found one of the target's assets but are stuck collecting the rest, check whether that site's body contains a distinctive marker of the target, then search for that marker in a cyberspace search engine (FOFA, etc.), e.g. body="XX公司" or body="baidu".

This works best for targets with distinctive markers and a large number of assets, and it is often remarkably effective.

2. Once you have found test.com's marker this way, search by body again; when test.com shows up in those results, the IP shown on FOFA is very likely test.com's real IP.

3. If the target is a government website, then when batch-fetching homepages you can make use of

http://114.55.181.28/databaseInfo/index

and then combine it with the previous step for further information gathering.

SSL/TLS certificate lookup

SSL/TLS certificates usually contain domains, subdomains, email addresses, and so on; combining the information in certificates helps locate the target's assets faster and learn more about them.

myssl.com/

crt.sh

censys.io

developers.facebook.com/tools/ct/

google.com/transparenc…

SSL certificate search engines:

certdb.com/domain/gith…

crt.sh/?Identity=%…

censys.io/
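crt.sh also exposes JSON output that is convenient for scripting. A minimal sketch assuming the requests package and crt.sh's output=json parameter:

# Pull subdomains from certificate transparency logs via crt.sh (assumes: pip install requests)
import requests

def crtsh_subdomains(domain):
    params = {"q": f"%.{domain}", "output": "json"}
    resp = requests.get("https://crt.sh/", params=params, timeout=30)
    names = set()
    for entry in resp.json():
        # name_value may hold several names separated by newlines, including wildcards
        for name in entry.get("name_value", "").splitlines():
            name = name.strip().lstrip("*.")
            if name.endswith(domain):
                names.add(name)
    return sorted(names)

if __name__ == "__main__":
    for sub in crtsh_subdomains("example.com"):
        print(sub)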

GetDomainsBySSL.py


Finding vendor IP ranges

ipwhois.cnnic.net.cn/index.jsp

Mobile asset collection

WeChat mini-programs and Alipay mini-programs

Many companies now have mini-programs. Follow the company's WeChat official account or Alipay mini-program, or follow the staff who run them and check their WeChat Moments, to find the mini-programs.

weixin.sogou.com/weixin?type…


App search

www.qimai.cn/


Social information search

QQ groups, QQ numbers, phone numbers

WeChat groups

LinkedIn

www.linkedin.com/

Maimai job postings

BOSS Zhipin job postings

Sensitive JS files

github.com/m4ll0k/Secr…

github.com/Threezh1/JS…

github.com/rtcatc/Pack…

GitHub leak monitoring

github.com/0xbug/Hawke…

github.com/MiSecurity/…

github.com/VKSRC/Githu…

Identifying defensive software

Security protections: cloud WAF, hardware WAF, host protection software, software WAF

Social engineering

WeChat or QQ
Blend into internal groups and observe; add customer-service staff and send them links to extract more sensitive information; trial the product or buy the service to obtain test accounts and passwords.

Physical access
Go to the company's office floor, connect to the Wi-Fi and reach the intranet; drop USB sticks carrying backdoors;
stand up a free Wi-Fi hotspot to capture accounts and passwords.

Breach databases
Use social-engineering bots on Telegram, or a local breach database, to look up a user's or mailbox's passwords (plaintext or hashed),
then combine the candidate passwords for guessing and login attempts.

Asset collection power tools

ARL (Asset Reconnaissance Lighthouse)

github.com/TophantTech…

git clone https://github.com/TophantTechnology/ARL
cd ARL/docker/
docker volume create arl_db
docker-compose pull
docker-compose up -d 

AssetsHunter

github.com/rabbitmask/…

A tool for SRC asset information gathering

github.com/sp4rkw/Reap…

domain_hunter_pro

github.com/bit4woo/dom…

LangSrcCurise

github.com/shellsec/La…

Network-segment assets

github.com/colodoo/mid…

Tools

Recommended fuzzing wordlists: github.com/TheKingOfDu…

BurpCollector (a Burp Suite parameter collection extension): github.com/TEag1e/Burp…

Wfuzz:github.com/xmendez/wfu…

LinkFinder:github.com/GerbenJavad…

PoCBox:github.com/Acmesec/PoC…