Don't scan, just ask
A new approach to identifying vulnerable web applications




Fabian Mihailowitsch               28th Chaos Communication Congress, 12/28/11 - Berlin
Summary

It's about identifying web applications and systems…
Classical network reconnaissance techniques mostly rely on technical facts, like net blocks or
URLs. They don't take business connections into account and hence don't identify all potential targets.
Therefore 'Spider-Pig' was developed: a tool that searches for web applications connected to a
company based on a scored keyword list and a crawler. That way web applications run by satellite
companies or service providers can be identified as well.




TABLE OF CONTENTS
 Overview of classical network reconnaissance
 Restrictions and possible enhancements
 Presentation of ‘Spider-Pig’ and its approach
 Statistics of a real-life scenario
 Conclusion




Classical network reconnaissance


 TECHNIQUES
  Query NIC and RIPE databases
  Reverse DNS
  DNS Zone Transfer
  Research in forums, social networks, …
  Google hacking
  Identify routing and hence new systems
  Brute-Force DNS
 …

 TOOLS
  whois, dig, web browser, Maltego, …




What business looks like…

Company A.0: THE TARGET :)

SATELLITE COMPANIES (e.g. Company A.1)
 Branch office
 Partner firm

SERVICE PROVIDER (e.g. Company B)
 Payment Provider
 Separate Webhoster
 Marketing Campaign
 Service Center
 …




…what we see with network reconnaissance

Company A.0: ALL (?)
 Net blocks
 IP addresses
 URLs
 DNS information

Company A.1: SOME (!)
 Net blocks
 IP addresses
 URLs
 DNS information

Company B: (not seen at all)




Restrictions and possible enhancements

Restrictions
 Scans are done solely on a technical level: IPs, net blocks, URLs.
 No business relations are taken into account.
 Potentially incomplete view: you can only search for what you know (e.g. RIPE queries).
 Probably just one company in focus.
 Not all targets are identified!

Enhancements
 For an attacker it doesn't matter whether he attacks company A or B: access to sensitive data, reputational damage, …
 Take business relationships into account.
 Build a logical network between systems, independent of the owner.
 Identify (all) applications / systems linked to a company on a business-connection level, independent of provider.
 Development of 'Spider-Pig'.


Approach of Spider-Pig
Spider-Pig identifies web applications connected to a company in an automated manner and hence
makes it possible to identify potential targets (web applications and systems) for further attacks.
Due to German laws the tool will not be released to the public, but its methodology will be explained…


1. Model Business: Model the business, services and relationships of the company in scope,
based on a scored keyword list with terms like company name, product names or tax numbers.
Note: No technical details (e.g. IPs) have to be provided.

2. Perform Search: Based on the keywords, different search engines are queried in an automated
manner. Furthermore, whois information is gathered for the identified domains and IP addresses.
Identified web sites are crawled for links to new web pages.

3. Calculate Results: Duplicates are removed from the results. For each domain a score is
calculated and a sorted list with the results is generated. Top results are reviewed manually
and potential targets are extracted.

4. Exploit: Targets are sent to a custom vulnerability scanner that uses the data from the
crawling phase and attacks the web sites in an automated manner. At the end, we hopefully
get a shell! :)

First step: Model Business

Idea
 Web applications connected to a company contain terms like the company name, products or imprint.
 There are imprecise terms, like product names, that lead to many web sites (e.g. Amazon).
 There are precise and very specific terms, like tax numbers or imprints, that lead to company pages.


Solution
 Manually assemble a list that contains terms specific to the company in focus.
 Assign each entry a score reflecting how unique and specific it is.




Note: Experience showed that ~200 terms are sufficient and already lead to hundreds of thousands of
results. Technical details (e.g. URLs or software used) can be used but don't have to be. A score from 1-100
works pretty well, but can be adapted.
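Such a scored keyword list can be sketched as follows. Spider-Pig itself was written in Perl; this is a hedged Python illustration, and every entry below is an invented example, not real company data:

```python
# Hedged sketch of a scored keyword list (scores 1-100, as described
# above). All entries are invented examples, not real company data.
keywords = {
    "Company A GmbH": 90,                          # company name: fairly specific
    "USt-IdNr. DE000000000": 100,                  # tax number: extremely specific
    "product1": 15,                                # product name: imprecise, many hits
    "Imprint: Company A GmbH, Example St. 1": 95,  # imprint text: very specific
}

# Precise terms carry far more weight than generic ones:
high_signal = [term for term, score in keywords.items() if score >= 90]
print(len(keywords), len(high_signal))  # 4 3
```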


Second step: Perform Search

Idea
 Keywords are used to query different search engines (e.g. Google, Bing, Yahoo).
 Results are read and a list is built, with the fully qualified domain name (FQDN) as the index.
 Every time the FQDN is part of a search result, the score assigned to that search term is added.
 For the IP and domain name a 'whois query' is executed to get details on the owner.
 If the whois record contains the company name, address or a keyword, the FQDN is marked as a hit.

The search loop:
1. Take keyword from list
2. Query search engines
3. Check results and calculate score
4. Perform whois query
5. Additional steps

Example: If you focus on a certain market (e.g. Europe) you can exclude all other results based
on the 'CountryCode' received via whois.

Note: Using the FQDN as identifier already removes duplicates and speeds up the process.

 FQDN        IP        URL                         Title           Score      Hit
 www.a.com   1.2.3.4   http://www.a.com/about     Company A       10 (9+1)   N
                       http://www.a.com/product1  View product1

Second step: Perform Search

Implementation Notes
 The search component was developed in Perl and is not more than 503 LOC :)
 Google and Bing are used as search engines.
 Results are stored in memory and written to disk at regular intervals.


Querying NIC and RIPE databases

Depending on the database, the results have
different layouts. Consider this when parsing the
'whois queries'.

If you query too fast, you get blocked.
Hence you have to rotate the queried servers
or throttle the querying speed.

As the search results can easily consist of multiple
thousands of systems, consider that a small delay
per query will delay the whole process by days.
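The rotation idea can be sketched like this (a Python illustration with an assumed server list; the actual whois calls and sleep logic are omitted, since this only shows the scheduling):

```python
# Sketch of server rotation for bulk whois queries. Real code would
# also sleep between queries to the same server, since querying too
# fast gets you blocked. The server list is an assumption.
import itertools

WHOIS_SERVERS = ["whois.ripe.net", "whois.arin.net", "whois.apnic.net"]

def assign_servers(domains):
    """Round-robin the queried servers so no single one sees
    every request."""
    rotation = itertools.cycle(WHOIS_SERVERS)
    return [(next(rotation), d) for d in domains]

plan = assign_servers(["a.com", "b.com", "c.com", "d.com"])
print(plan[0], plan[3])
# ('whois.ripe.net', 'a.com') ('whois.ripe.net', 'd.com')
```

Even with rotation, note that a 1-second delay per query over 100,000 domains adds more than a day, which is exactly the warning above.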




Second step: Perform Search

Using search engines
For Perl there are many modules for different search engines; however, most of them don't work.
But 'curl' and some regular expressions are enough to implement it yourself.
Google covers most of it; however, Bing leads to some additional results.


Google
 Google Custom Search API can be used
 1,000 queries / day are free
 If you want to use the service efficiently, you need to buy a search key, which is charged by the number of search queries
 For a 'normal' search this should result in ~200$
 Consider different languages (!) as they will lead to diverse results, and disable the filter (!)

Bing
 Bing API is available
 An AppID is needed; it can be requested for free at the Bing Developer Center
 The service is free of charge; however, the queries have to be throttled
 Consider different markets (!) as they will lead to diverse results, and disable the filter (!)


 Google:
 https://www.googleapis.com/customsearch/v1?q=[qry]&num=10&start=[pointer]&lr=[lang]&key=[...]&cx=[cx]&filter=0

 Bing:
 http://api.search.live.net/xml.aspx?Appid=[appid]&query=[qry]&sources=web&adult=off&market=[market]&web.count=50&web.offset=[pointer]
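Filling in those URL templates with pagination can be sketched as follows (a Python illustration; the key, cx and AppID values are dummies, only the parameter names come from the templates):

```python
# Build paginated request URLs for the two endpoints above. Key, cx
# and AppID values are dummies; parameter names follow the templates.
from urllib.parse import urlencode

def google_url(qry, pointer, lang="lang_en", key="DUMMY", cx="DUMMY"):
    # Google Custom Search pages 10 results at a time via 'start'.
    params = {"q": qry, "num": 10, "start": pointer, "lr": lang,
              "key": key, "cx": cx, "filter": 0}
    return "https://www.googleapis.com/customsearch/v1?" + urlencode(params)

def bing_url(qry, pointer, appid="DUMMY", market="en-US"):
    # Bing pages 50 results at a time via 'web.offset'.
    params = {"Appid": appid, "query": qry, "sources": "web",
              "adult": "off", "market": market,
              "web.count": 50, "web.offset": pointer}
    return "http://api.search.live.net/xml.aspx?" + urlencode(params)

print(google_url("Company A", 11))
```

Rotating `lang` on Google and `market` on Bing, as the slide suggests, is just a matter of looping over those parameters.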

Second step: Crawl

Idea
 Search queries lead to a scored list of potential targets.
 The higher the score, the more likely it is that the web page is connected to the company in focus.
 Company web pages probably link to further company web pages.
 Web pages with a high score (e.g. 200) should be crawled and checked for external links.


Crawling top scored web pages

Select certain systems from the list (see third step) and feed them to a
crawler.

The crawler then identifies new external links on these web pages
and hence discovers new web applications that might be of interest.
These can be appended to the results list.

Furthermore the crawler can already perform the first step of the
vulnerability scan.




Second step: Crawl

Credits
The vulnerability management
framework (pita) mentioned during
the talk wasn't developed by me.

It is the result of multiple years of
development by various security
consultants working at diverse
companies and as freelancers!

For 'Spider-Pig' just some
modifications were made to base
components.



Crawler
 Written in Perl
 Identifies external links and adds
them to the results list
 Performs first step of VA

Third step: Calculate Results

Idea
 A scored list with all identified targets is now available.
 If no filter is applied, the list contains hundreds of thousands of results and cannot be checked manually.
 Results with a low score (e.g. <10) are probably not relevant.
 Results with a very high score are probably irrelevant too, as they refer to online shops, portals, etc.
 Hence just a small portion of the results has to be analyzed manually.

[Chart: Score (y-axis) per web page (x-axis); the 'Interesting Results' lie in the middle score band.]
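That middle-band selection can be sketched as follows (a Python illustration; the thresholds and sample data are made up):

```python
# Sketch of the middle-band filter: drop low-score noise and the
# very high-score portals/shops. Thresholds are illustrative only.
def interesting(scored, low=10, high=1000):
    return [(fqdn, s) for fqdn, s in scored if low <= s <= high]

scored = [("spam.example", 3), ("big-portal.example", 5000),
          ("campaign-a.example", 240), ("service-a.example", 85)]
print(interesting(scored))
# [('campaign-a.example', 240), ('service-a.example', 85)]
```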




Fourth step: Exploit

Idea
 Real targets have now been identified that are somehow connected to the company in focus.
 Information regarding the content, forms, etc. has been saved in the database during the crawling.
 This basis can be used for an automated vulnerability scan.


pita
Note (again): The vulnerability management framework (pita) mentioned during the talk hasn’t been
developed by me. It is the effort of multiple years of development by various security consultants.

Based on the identified web applications and the data collected during the crawling (e.g. forms that can be
submitted), the vulnerability management framework ‘pita’ is started.

‘pita’ runs:
 Fuzzer (XSS, SQL Injection, …)
 Web Application Fingerprinter (wafp)
 Cookie/Session analyzer
 SSL tests
 3rd party tools (sqlmap, nmap, Nikto, …)


     At the end we get a report with all vulnerabilities, built from nothing but keywords… :)
Statistics

If we ran 'Spider-Pig' against a big, well-known company, we might
get the following results:
Keywords used             287
Duration (including VA)   7 days (it could be optimized :))
Costs                     ~$200 due to Google; $0 without Google
Unique FQDNs              150,871
FQDNs with filter         50,784 (after a filter for EU countries was applied)
Applications reviewed     300
Confirmed applications    223
                           Marketing campaigns, customer service, official web pages, …
                           Applications were hosted on external and internal systems
                           Official web pages within the top 10! It works! :)
Vulnerabilities




Conclusion

Classical network reconnaissance doesn’t take all aspects into account and
hence does not identify all potential targets of a company in focus.



Development of 'Spider-Pig', a tool to model business connections.
 'Spider-Pig' makes it possible to identify web applications linked to a company through their business connections.
 Based on a list of keywords, without any technical details, 'Spider-Pig' attacks companies.
 Besides assembling the list of keywords and manually reviewing the results, no human interaction is needed.
 'Spider-Pig' collects some interesting statistics/information and offers new perspectives.




Companies should not only focus on protecting their own systems.
 An attacker will (most likely) take the easiest way.
 Companies should choose their service providers with caution.
 Sensitive information should not be spread across multiple systems and business partners.




Thanks! Questions?
Contact me: 'fmihailowitsch [at] deloitte.de'




Credits
Richard Sammet, who developed the idea & tool with me!




