All notable changes to this project will be documented in this file. Dates are displayed in UTC.
- reports: changed file name composition from report.mydomain.com.* to mydomain.com.report.*
#9
- crawler: fixed a rare edge case where queue processing had already finished but the last outstanding coroutine still found new URLs
a85990d
- javascript processor: improved webpack JS processing to correctly replace VueJS paths during offline export (e.g. on docs.netlify.com) .. without this, the HTML had correct paths in the left menu, but JS immediately broke them because they started with an absolute path (leading slash)
9bea99b
- offline export: detect and process fonts.googleapis.com/css* as CSS even if there is no .css extension
da33100
- js processor: removed the forgotten var_dump
5f2c36d
- offline export: improved detection of external JS in the case of webpack (URLs composed dynamically from an object defining the chunks) - debugged on docs.netlify.com
a61e72e
- offline export: when a URL ends with a dot and a number (so it looks like an extension), it must not be treated as an extension in some cases
c382d95
- offline url converter: better support for SVG in case the URL does not contain an extension at all, but has e.g. 'icon' in the URL (it's not perfect)
c9c01a6
- offline exporter: emit a warning instead of an exception for some edge cases, e.g. failing to save an SVG without an extension no longer stops the export
9d285f4
- cors: do not set Origin request header for images (otherwise error 403 on cdn.sanity.io for svg, etc.)
2f3b7eb
- best practice analyzer: when checking for missing quotes, ignore values longer than 1000 characters (fixes e.g. the error 'Compilation failed: regular expression is too large at offset 90936' on skoda-auto.cz)
8a009df
- html report: added loading of extra headers to the visited URL list in the HTML report
781cf17
- Frontload the report names
62d2aae
- robots.txt: added option --ignore-robots-txt (we often need to view internal or preview domains that are otherwise prohibited from indexing by search engines)
9017c45
- http client: added an explicit 'Connection: close' header and an explicit $client->close() call, even though Swoole was doing this automatically after the coroutine exited
86a7346
- javascript processor: parse JS module import URLs only in JS files (otherwise imports from HTML documentation, e.g. on svelte.dev or nextjs.org, were parsed by mistake)
592b618
- html processor: added extraction of URLs from HTML attributes that are not wrapped in quotes (the current regexps can still cause problems when unescaped spaces are used)
f00abab
- offline url converter: swapped the woff2/woff order in the regex - their priority matters here, and woff2 did not work properly before
3f318d1
- non-200 url basename detection: URLs that share a basename but carry the image URL in query parameters (e.g. image generators) are no longer treated as having the same basename
bc15ef1
- supertable: automatic creation of active links now also applies to the homepage '/'
c2e228e
- analysis and robots.txt: improved the display of URLs in SEO analysis for multi-domain websites, so the same URL (e.g. '/') can no longer appear in the overview multiple times without the domain or scheme being distinguishable + improved robots.txt handling in SEO detection and the display of URLs banned from indexing
47c7602
- offline website exporter: append the '_' suffix to a folder name only for typical static-file extensions - this must not happen to domain names as well
d16722a
- javascript processor: extract JS urls also from imports like import {xy} from "./path/foo.js"
aec6cab
- visited url: added 'txt' extension to looksLikeStaticFileByUrl()
460c645
- html processor: extract JS urls also from <link href="*.js">, typically with rel="modulepreload"
c4a92be
- html processor: extracted repeated getFullUrl() calls into a variable
a5e1306
- analysis: do not include URLs that failed to load (timeout, skipped, etc.) in the content-type and source-domain analyses - prevents displaying the content type 'unknown'
b21ecfb
- cli options: improved quote removal, now also for options that can be arrays - fixes e.g. --extra-columns='Title'
97f2761
- url skipping: if many URLs share the same basename (the part after the last slash), allow at most 5 requests for that basename - this prevents a flood of 404s when every page contains a broken relative link such as relative/my-img.jpg (e.g. the 404 page on v2.svelte.dev); see the sketch below
4fbb917
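
A minimal sketch of how such a basename cap can work (the function name, counter array and constant are illustrative, not the crawler's actual internals):

```php
<?php
declare(strict_types=1);

// Illustrative cap matching the "maximum of 5 requests" rule above.
const MAX_REQUESTS_PER_BASENAME = 5;

/** @param array<string, int> $counts per-basename request counter */
function shouldSkipByBasename(string $url, array &$counts): bool
{
    $path = parse_url($url, PHP_URL_PATH);
    if (!is_string($path) || $path === '') {
        return false;
    }
    $basename = basename($path); // the part after the last slash
    if ($basename === '') {
        return false;
    }
    $counts[$basename] = ($counts[$basename] ?? 0) + 1;
    // A broken relative link like relative/my-img.jpg on every page would
    // otherwise trigger a 404 request for each page it appears on.
    return $counts[$basename] > MAX_REQUESTS_PER_BASENAME;
}
```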
- analysis: perform most of the analysis only on URLs from domains for which we have crawling enabled
313adde
- audio & video: added audio/video file search in <audio> and <video> tags, if file crawling is not disabled
d72a5a5
- best practices: reworded the confusing warning '<h2> after <h0>' to '<h2> without previous heading'
041b383
- initial url redirect: when the entered URL redirects to another URL/domain within the same 2nd-level domain (typically http->https or mydomain.tld -> www.mydomain.tld), crawling continues with the new URL/domain, which is declared the new initial URL
166e617
22 December 2023
- version 1.0.7.20231222 + changelog
9d2be52
- html report template: updated logo link to crawler.siteone.io
9892cfe
- http headers analysis: renamed 'Headers' to 'HTTP headers'
436e6ea
- sitemap generator: added info about crawler to generated sitemap.xml
7cb7005
- html report: refactored all inline on* event handlers to data attributes with event listeners attached from static JS inside <script>, so that all inline JS in the online HTML report can be disabled and only our JS, signed with hashes via Content-Security-Policy, is allowed (see the sketch below)
b576eef
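
A hedged illustration of the CSP hashing technique described above (the script body and header composition are simplified examples, not the report's actual code):

```php
<?php
// One static <script> body whose hash will be whitelisted.
$script = "document.addEventListener('click', handleClick);"; // illustrative

// CSP hash: base64 of the raw SHA-256 digest of the script contents.
$hash = base64_encode(hash('sha256', $script, true));

// No 'unsafe-inline' here, so inline on* handlers are blocked and only
// scripts matching a listed hash are allowed to run.
header("Content-Security-Policy: script-src 'sha256-{$hash}'");
echo "<script>{$script}</script>";
```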
- readme: removed HTTP auth from the roadmap (it's already done), improved the guide on implementing your own upload endpoint, and moved the SMTP note under the mailer options
e1567ae
- utils: hide passwords/authentication specified in CLI parameters such as *auth=xyz (e.g. --http-auth=abc:xyz) in the HTML report
c8bb88f
- readme: fixed formatting of the upload and expert options
2d14bd5
- readme: added Upload Options
d8352c5
- upload exporter: added the --upload option to upload the HTML report to a remote URL, by default crawler.siteone.io/html/*
2a027c3
- parsed-url: fixed warning in the case of url without host
284e844
- seo and opengraph: fixed false positives 'DENY (robots.txt)' in some cases
658b649
- best practices and inline-svgs: detection and display of the entire icon set in the HTML report in the case of an <svg> with multiple <symbol> or <g> elements
3b2772c
- sitemap generator: sort URLs primarily by the number of slashes and secondarily alphabetically (thanks to this, URLs of the main levels come first)
bbc47e6
- sitemap generator: only include URLs from the same domain as the initial URL
9969254
- changelog: updated by 'composer changelog'
0c67fd4
- package.json: used by auto-changelog generator
6ad8789
8 December 2023
- readme: removed bold links from the intro (it didn't look as good on github as it did in the IDE)
b675873
- readme: improved intro and gif animation with the real output
fd9e2d6
- http auth: for security reasons, auth data is sent only to the same 2nd-level domain (and its subdomains). With HTTP Basic auth, the username and password are only base64-encoded and would otherwise be sent to foreign domains referenced from the crawled website (see the sketch below)
4bc8a7f
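
A sketch of the same-2nd-level-domain rule under a simplifying assumption (it ignores multi-part public suffixes such as .co.uk):

```php
<?php
// Decide whether Basic auth credentials may be sent to $targetHost.
function isSameSecondLevelDomain(string $initialHost, string $targetHost): bool
{
    $secondLevel = static function (string $host): string {
        $parts = explode('.', strtolower($host));
        return implode('.', array_slice($parts, -2)); // e.g. mydomain.tld
    };
    return $secondLevel($initialHost) === $secondLevel($targetHost);
}

// Auth goes to mydomain.tld and its subdomains, never to foreign domains.
var_dump(isSameSecondLevelDomain('mydomain.tld', 'www.mydomain.tld')); // true
var_dump(isSameSecondLevelDomain('mydomain.tld', 'cdn.other.tld'));    // false
```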
- html report: increased the specificity of the .header class for the header, because this class was also used by the generic <td class='header'> in the security tab
9d270e8
- html report: improved readability of badge colors in light mode
76c5680
- crawler: moved the decrement of active workers to after URLs are parsed from the content, where the queue can still be filled (because of this, queue processing could previously get stuck in the final stages)
f8f82ab
- analysis: do not parse/check empty HTML (it produced an unnecessary warning) - it is valid to have content-type: text/html with content-length: 0 (for example 'gtm.js?id=')
436d81b
3 December 2023
- changelog: updated changelog after 3 commits added to the still-untagged draft release 1.0.5
f42fe18
- utils tests: fixed tests of methods getAbsolutePath() and getOutputFormattedPath()
d4f4576
- crawler.php: replaced preg_match with str_contains
5b28952
- version: 1.0.5.20231204 + changelog
7f2e974
- option: replace placeholders like '%domain%' also in the validateValue() method, because it also checks whether a path is writable by attempting mkdir
329143f
- swoole in cygwin: improved getBaseDir() to work better even with the version of Swoole that does not have SCRIPT_DIR
94cc5af
- html processor: it must also process pages with a redirect, because the URL in the meta redirect tag needs to be replaced
9ce0eee
- sitemap: use the formatted output path (primarily for better output in a Cygwin environment, which needs the C:/foo <-> /cygdrive/c/foo conversion)
6297a7f
- file exporter: use the formatted output path (primarily for better output in a Cygwin environment, which needs the C:/foo <-> /cygdrive/c/foo conversion)
426cfb2
- options: in the case of dir/file validation, we want to work with absolute paths for more precise error messages
6df228b
- crawler.php: improved baseDir detection - we want to work with absolute path in all scenarios
9d1b2ce
- utils: improved getAbsolutePath() for cygwin and added getOutputFormattedPath() with reverse logic for cygwin (C:/foo/bar <-> /cygdrive/c/foo/bar)
161cfc5
- offline export: renamed --offline-export-directory to --offline-export-dir for consistency with --http-cache-dir or --result-storage-dir
26ef45d
30 November 2023
- dom parsing: handle warnings when some DOM elements cannot be parsed correctly, fixes #3
#3
- version: 1.0.4.20231201 + changelog
8e15781
- options: ignore empty values in the case of directives with the possibility of repeated definition
5e30c2f
- http-cache: now the http cache is turned off using the 'off' value (it's more understandable)
9508409
- core options: added --console-width to enforce the definition of the console width and disable automatic detection via 'tput cols' on macOS/Linux or 'mode con' on Windows (used by Electron GUI)
8cf44b0
- gui support: added base-dir detection for Windows where the GUI crawler runs in Cygwin
5ce893a
- renaming: renamed 'siteone-website-crawler' to 'siteone-crawler' and 'SiteOne Website Crawler' to 'SiteOne Crawler'
64ddde4
- utils: fixed color-support detection
62dbac0
- core options: added --force-color option to bypass tty detection (used by the Electron GUI)
607b4ad
- best practice analysis: when checking an image (e.g. for the existence of WebP/AVIF), external images are also checked, because websites very often link images from external domains or from image-modification/optimization services
6100187
- html report: set scaleDown as default object-fit for image gallery
91cd300
- offline exporter: added short -oed as alias to --offline-export-directory
22368d9
- image gallery: list of all images on the website (except those from srcset, which would only duplicate other sizes or formats), including SVG, with rich filtering options (by image format, size and source tag/attribute) and the choice of a small/medium/large view and scale-down/contain/cover for the object-fit CSS property
43de0af
- core options: added a shortened version of each option name consisting of one hyphen and the first letters of the words of the full option (e.g. --memory-limit has the short version -ml; see the sketch below), added getInitialScheme()
eb9a3cc
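
A sketch of how such a short form can be derived (the real implementation may differ in details such as collision handling, e.g. the hand-picked -rps for --max-reqs-per-sec):

```php
<?php
// Derive "-" plus the first letter of each word of the long option name.
function shortOptionFor(string $longOption): string
{
    $words = explode('-', ltrim($longOption, '-'));
    $initials = array_map(fn(string $word): string => $word[0], $words);
    return '-' . implode('', $initials);
}

echo shortOptionFor('--memory-limit'), PHP_EOL;             // -ml
echo shortOptionFor('--offline-export-directory'), PHP_EOL; // -oed
```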
- visited url: added 'sourceAttr' with information about where the given URL was found and useful helper methods
6de4e39
- found urls: when one URL occurs in several places/attributes, the first occurrence is considered the main one (typically the same URL in src and then also in srcset)
660bb2b
- url parsing: added more recognition of which attributes the given URL address was parsed from (we need to recognize src and srcset for ImageGallery in particular)
802c3c6
- supertable and urls: when removing the redundant hostname for more compact URL output, the scheme (http:// or https://) of the initial URL is also taken into account (otherwise some URLs looked like duplicates) + prevented bash ANSI color definitions from appearing in the HTML output
915469e
- title/description/keywords parsing: added HTML entity decoding because some websites use encoded entities for characters such as í, –, etc.
920523d
- crawler: added 'sourceAttr' to the swoole table queue and to already visited URLs (used in the Image Gallery for filtering, so that lots of duplicate images differing only in resolution, coming from srcsets, are not displayed unnecessarily)
0345abc
- url parameter: the scheme can now be omitted and https:// or http:// is added automatically (http:// e.g. for localhost); see the sketch below
85e14e9
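
A minimal sketch of the scheme completion; the localhost/127.x heuristic is an assumption for illustration:

```php
<?php
// Prepend a scheme when the user omits it: http:// for local hosts,
// https:// for everything else.
function addSchemeIfMissing(string $url): string
{
    if (preg_match('~^https?://~i', $url)) {
        return $url; // scheme already present
    }
    $isLocalhost = str_starts_with($url, 'localhost')
        || preg_match('~^127\.0\.0\.\d+~', $url);
    return ($isLocalhost ? 'http://' : 'https://') . $url;
}

echo addSchemeIfMissing('mydomain.tld'), PHP_EOL;   // https://mydomain.tld
echo addSchemeIfMissing('localhost:8080'), PHP_EOL; // http://localhost:8080
```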
- disabled images: when image removal is requested, replace the image body with a 1x1 px transparent GIF and place a semi-transparent hatch with the crawler logo as the background
c1418c3
- url regex filtering: added an option that limits the list of crawled pages according to the declared regexps, while still allowing assets (JS, CSS, images, fonts, documents, etc.) to be crawled and downloaded from any URL (with respect to allowed domains)
21e67e5
- img srcset parsing: a valid URL can itself contain a comma (various dynamic parametric image generators use them), and in srcset a comma followed by whitespace should separate multiple values - the srcset parsing now reflects this (see the sketch below)
0db578b
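
A sketch of comma-tolerant srcset splitting based on the rule above (the sample URLs are invented):

```php
<?php
// Candidates are separated only where a comma is followed by whitespace,
// so commas inside parametric image-generator URLs survive intact.
$srcset = 'https://img.gen/w=100,h=50/a.jpg 100w, https://img.gen/w=200,h=100/a.jpg 200w';

$candidates = preg_split('/,\s+/', $srcset);
foreach ($candidates as $candidate) {
    // First token is the URL, optional second token is the size descriptor.
    [$url, $descriptor] = array_pad(preg_split('/\s+/', trim($candidate), 2), 2, null);
    echo $url, ' => ', $descriptor ?? '(no descriptor)', PHP_EOL;
}
```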
- websocket server: added option to set --websocket-server, which starts a parallel process with the websocket server, through which the crawler sends various information about the progress of crawling (this will also be used by Electron UI applications)
649132f
- http client: handle scenario when content loaded from cache is not valid (is_bool)
1ddd099
- HTML report: updated logo with final look
2a3bb42
- mailer: shortening and simplifying email content
e797107
- robots.txt: added info about loaded robots.txt to summary (limited to 10 domains for case of huge multi domain crawling)
00f9365
- redirects analyzer: handled edge case with empty url
e9be1e3
- text output: added fancy banner with crawler logo (thanks to great SiteOne designers!) and smooth effect
e011c35
- content processors: added applyContentChangesBeforeUrlParsing() and better NextJS chunks handling
e5c404f
- url searches: added ignoring data:, mailto:, tel:, file:// and other non-requestable resources also to FoundUrls
5349be2
- crawler: added declare(strict_types=1) and banner
27134d2
- heading structure analysis: highlighting and calculating errors for duplicate <h1> + added help cursor with a hint
f5c7db6
- core options: added --help and --version, colorized help
6f1ada1
- ./crawler binary: send the output of 'cd -' to /dev/null and hide the unwanted printed script path
16fe79d
- README: updated paths in the documentation (the previous examples could lead to: ERROR: Option --url () must be valid URL)
86abd99
- options: --workers default for Cygwin runtime is now 1 (instead of 3), because Cygwin runtime is highly unstable when workers > 1
f484960
10 November 2023
- version: 1.0.3.20231110 + changelog
5b80965
- cache/storage: better race-condition handling in the situation where several coroutines could create the same folder at once and mkdir then reported 'File exists' (see the sketch below)
be543dc
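
A sketch of race-tolerant directory creation in this spirit (the function name is illustrative):

```php
<?php
// Another coroutine may create the folder between the is_dir() check and
// mkdir(), so a 'File exists' failure is re-checked instead of treated as
// a fatal error.
function ensureDir(string $dir): void
{
    if (is_dir($dir)) {
        return;
    }
    if (!@mkdir($dir, 0777, true) && !is_dir($dir)) {
        // Only a real failure (permissions, disk, ...) reaches this point.
        throw new RuntimeException("Cannot create directory '{$dir}'");
    }
}
```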
10 November 2023
- version: 1.0.2.20231110 + changelog
230b947
- html report: added aria labels to active/important elements
a329b9d
- version: 1.0.1.20231109 - changelog
50dc69c
9 November 2023
- version: 1.0.1.20231109
e213cb3
- offline exporter: fixed the case where an https:// website links to the same path with the http:// protocol (it overwrote the proper *.html file with just a meta redirect .. real case from nextjs.org)
4a1be0b
- html processor: force removal of all anchor listeners when NextJS is detected (it is very hard to achieve a working NextJS site over the offline file:// protocol)
2b1d935
- file exporters: by default the crawler now generates an html/json/txt report to 'tmp/[report|output].%domain%.%datetime%.[html|json|txt]' .. I assume that most people will want to save/see them
7831c6b
- security analysis: removed multi-line console output for recommendations .. it was ugly
310af30
- json output: added JSON_UNESCAPED_UNICODE for unescaped unicode chars (e.g. czech chars will be readable)
cf1de9f
- mailer: do not send e-mails in case of interruption of the crawler using ctrl+c
19c94aa
- refactoring: manager stats logic extracted into ManagerStats and also implemented in the content-processor manager + stats added to the 'Crawler stats' tab in the HTML report
3754200
- refactoring: content-related logic extracted into content processors based on a ContentProcessor interface with the methods findUrls(): ?FoundUrls, applyContentChangesForOfflineVersion(): void and isContentTypeRelevant(): bool (see the sketch below) + better division of web-framework-related logic (NextJS, Astro, Svelte, ...) + better URL handling and maximized usage of ParsedUrl
6d9f25c
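
The interface shape as recorded in this entry; parameter lists are omitted because the changelog captures only the method names and return types, and FoundUrls is stubbed so the snippet is self-contained:

```php
<?php
// Stub so the interface below compiles standalone.
final class FoundUrls { /* collection of found-URL items */ }

interface ContentProcessor
{
    // Parse URLs (JS, CSS, images, fonts, ...) out of the content.
    public function findUrls(): ?FoundUrls;

    // Rewrite paths/content for the offline (file://) version of the website.
    public function applyContentChangesForOfflineVersion(): void;

    // Decide whether this processor applies to the given content type.
    public function isContentTypeRelevant(): bool;
}
```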
- phpstan: ignore BASE_DIR warning
6e0370a
- offline website exporter: improved the export of NextJS-based websites, but it's not perfect yet, because the latest NextJS versions do not have some JS/CSS paths in the code; they are generated dynamically from arrays/objects
c4993ef
- seo analyzer: fixed trim() warning when no <h1> found
f0c526f
- offline export: a lot of improvements when generating the offline version of the website on NextJS - chunk detection from the manifest, replacing paths, etc.
98c2e15
- seo and og: fixed division by zero when no og/twitter tags found
19e4259
- console output: lots of improvements for nice, consistent and minimal word-wrap output
596a5dc
- basic file/dir structure: created ./crawler (for Linux/macOS) and ./crawler.bat for Windows, init script moved to ./src, small related changes about file/dir path building
5ce41ee
- header status: ignore too dynamic Content-Disposition header
4e0c6fd
- offline website exporter: added the .html extension to files with typical dynamic-language extensions, because without it the browser shows them as source code
7130b9e
- html report: show tables with details, even if they are without data (it is good to know that the checks were carried out, but nothing was found)
da019e4
- tests: repaired tests after last changes of file/url building for offline website .. merlot is great!
7c77c41
- utils: be more precise and do not replace attributes in SVG .. creative designers will not love you when looking at the broken SVG in HTML report
3fc81bb
- utils: be more precise in parsing phone numbers, otherwise people will 'love' you because of false positives .. wine is still great
51fd574
- html parser: better support for formatted html with tags/attributes on multiple lines
89a36d2
- utils: don't be greedy in stripJavaScript() because you ate half of my html :) wine is already in my head...
0e00957
- file result storage: changed cache directory structure for consistency with http client's cache, so it looks like my.domain.tld-443/04/046ec07c.cache
26bf428
- http client cache: for better consistency with result storage cache, directory structure now contains also port, so it looks like my.domain.tld-443/b9/b989bdcf2b9389cf0c8e5edb435adc05.cache
a0b2e09
- http client cache: improved directory structure for large scale and easier partial cache deletion .. current structure in the tmp dir: my.domain.tld/b9/b989bdcf2b9389cf0c8e5edb435adc05.cache (a path-building sketch follows below)
10e02c1
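
A sketch of the path layout described in the three cache entries above; using md5 for the key is an assumption based on the 32-character hashes visible in the example paths:

```php
<?php
// Build host-port folder, two-character shard, then the hashed cache key.
function cacheFilePath(string $baseDir, string $host, int $port, string $cacheKey): string
{
    $hash = md5($cacheKey);
    return sprintf(
        '%s/%s-%d/%s/%s.cache',
        $baseDir,
        $host,
        $port,
        substr($hash, 0, 2), // two-character shard keeps folders small
        $hash
    );
}

echo cacheFilePath('tmp/http-client-cache', 'my.domain.tld', 443, 'GET /');
// tmp/http-client-cache/my.domain.tld-443/<xx>/<hash>.cache
```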
- offline website exporter: better srcset handling - urls can be defined with or without sizes
473c1ad
- html report: blue color for search term, looks better
cb47df9
- offline website exporter: handled situation of the same-name folder/file when both the folder /foo/next.js/ and the file /foo/next.js existed on the website (real case from vercel.com)
7c27d2c
- exporters: added exec times to summary messages
41c8873
- crawler: use the port from the URL if defined, otherwise derive it from the scheme .. the previous solution didn't work properly for localhost:port and for parsed URLs to external websites
324ba04
- heading analysis: changed sorting to DESC by errors, renamed Headings structure -> Heading structure
dbc1a38
- security analysis: detection and ignoring of URLs that point to a non-existent static file but return 404 HTML, better description
193fb7d
- super table: added escapeOutputHtml property to column for better escaping control + updated related supertables
bfb901c
- headings analysis: replaced usage of DOMNode->textContent because when headings contain other tags, including <script>, textContent also includes the JS code without the <script> tag itself
5c426c2
- best practices: better missing quotes detection and minimizing false positives in special cases (HTML/JS in attributes, etc.)
b03a534
- best practices: better SVG detection and minimizing false positives (e.g. code snippets with SVG), improved look in HTML report and better descriptions
c35f7e2
- headers analysis: added [ignored generic values] or [see values below] for specific headers
a7b444d
- core options: changed --hide-scheme-and-host to --show-scheme-and-host (hiding the scheme+host by default is better)
3c202e9
- truncating: replaced '...' with '…'
870cf8c
- accessibility analyzer: better descriptions
514b471
- crawler & http client: if the response is loaded from the cache, we do not wait due to rate limiting - very useful for repeated executions
61fbfab
- header stats: added missing strval in values preview
9e11030
- content type analyzer: increased column width for MIME type from 20 to 26 (enough for application/octet-stream)
c806674
- SSL/TLS analyzer: fixed issues on Windows with Cygwin where nslookup does not work reliably
714b9e1
- text output: removed redundant whitespaces from banner after .YYYYMMDD was added to the version number
8b76205
- readme: added link to #ready-to-use-releases to summary
574b39e
- readme: added section Ready-to-use releases
44d686b
- changelog: added changelog by https://github.com/cookpete/auto-changelog/tree/master + added 'composer changelog'
d11af7e
7 November 2023
- proxy: added support for --proxy=<host:port>, closes #1
#1
- license: renamed to LICENSE.md
c0f8ec2
- license: added license CC 4.0 BY
bd5371b
- version: set v1.0.0.20231107
bdbf2be
- version: set v1.0.0
a98e61e
- SSL/TLS analyzer: uncolorize valid-to in summary item, phpstan fixes (non-functional changes)
88d1d9f
- content type analyzer: added table with MIME types
b744f13
- seo analysis: added TOP10 non-unique titles and descriptions to tab SEO and OpenGraph + badges
4ae14c1
- html report: increased sidebar width to prevent wrapping in the case of higher numbers in badges
c5c8f4c
- dns analyzer: increased column size to prevent auto-truncation of dns/ip addresses
b4d4127
- html report: fixed badge with errors on DNS and SSL tab
e290403
- html report: ensure that no empty tabs will be in report (e.g. in case where all analyzers will be deactivated by --analyzer-filter-regex='/anything/')
6dd5bcc
- html report: improved replacement of non-badged cells to transparent badge for better alignment
172a074
- html report: increased visible part of long tables from 500px to 658px (based on typical sidebar height), updated title
0be355f
- utils: selected better colors for ansi->html conversion
6c2a8e3
- SSL/TLS analyzer: evaluation and hints about unsafe or recommended protocols, from-to validation, colorized output
5cea1fe
- SEO & OpenGraph analyzers: refactored class names, headings structure moved to own tab, other small improvements
75a9724
- security analyzer: better vulnerability explanations and better output formatting
ee172cb
- summary: selected more suitable icons from the utf-8 set that work well in the console and HTML
ef67483
- header stats: addValue() can accept both string and array
a0d746b
- headers & redirects - text improvements
3ac9010
- dns analyzer: colorized output and added info about CNAME chain into summary
7dd1f8a
- best practices analyzer: added SVG sanitization to prevent XSS, fine-tuning of missing quotes detection, typos
4dc1eb5
- options: added extras option, e.g. for number range validation
760a865
- seo and socials: small type-hint and phpstan fixes
bf695be
- best practice analyzer: added found depth to messages about too deep DOM depth
220b43c
- analysis: added SSL/TLS analyzer with info about the SSL certificate, its validity, supported protocols, issuer .. in the report, SSL/TLS info is under the 'DNS and TLS/SSL' tab
3daf175
- super table: show fulltext only for >= 10 rows + the visible height of the table in HTML shortened to 500px/20 rows with a 'Show entire table' link .. implemented with HTML+CSS only, so that it also works on devices without JS (e.g. the e-mail browser on iOS)
7fb9e52
- analysis: added seo & sharing analysis - meta info (title, h1, description, keywords), OG/Twitter data, heading structure details
53e12e6
- best practices: added checks for WebP and AVIF images
0ccabc6
- best practices: added brotli support reporting to tables
7ff2c53
- super table: added option to specify whether the table should be displayed on the output to the console, html or json
6bb6217
- headers analysis: analysis of HTTP headers of all requests to the main domain, their detailed breakdown, values and statistics
1fcc1db
- analysis: fixed search of attributes with missing quotes
3db31b9
- super table: added the number of found/displayed lines next to the full text
6e7f3d4
- super table: removed setting column widths for HTML table - works best without forcing widths
2a785e7
- html report: even wider report content is allowed, for better use of high-resolution displays
363990c
- pages 404: truncate too long urls
082bae6
- fixes: fixed various minor warnings related to specific content or parameters
da1802d
- options: ignore extra comma or empty value in list
3f5cab6
- super table: added useful fulltext search for all super tables
50a4edf
- colors: lighter color for badge.neutral in light mode because the previous one was too contrasting
0dbad09
- colors: notice is now blue instead of yellow and severity order fix in some places (critical -> warning -> notice -> ok -> info)
1b50b99
- colors: changed gray color to more platform-consistent color, otherwise gray was too dark on macOS
173c9bd
- scripts: removed helper run.tests* scripts
e9f0c8f
- analysis: added table with detailed list of security findings and URLs
5b9e0fe
- analysis: added SecurityAnalyzer, which checks the existence and values of security headers and performs HTML analysis for common issues
0cb7cb9
- http auth: added support for basic HTTP authentication by --http-auth=username:password
147e004
- error handling: improved behaviour in case of entering a non-existent domain or problems with DNS resolving
5c08fb4
- html report: implemented completely redesigned html report with useful information, with light/dark mode and possibility to sort tables by clicking on the header .. design inspired by Zanrly from Shuffle.dev
05da14f
- http client: fix of extension detection in the case of very non-standard or invalid URLs
113faa5
- options: increased default memory limit from 512M to 2048M + fixed refactored 'file-system' -> 'file' in docs for result storage
1471b28
- utils: fix that date formats are not detected as a phone number in parsePhoneNumbersFromHtml()
e4e1009
- strict types: added declare(strict_types=1) to all classes with related fixes and copyright
92dd47c
- dns analyzer: added information about the DNS of the given domain - shows the entire cname/alias chain as well as the final resolved IPv4/IPv6 addresses + tests
199421d
- utils: helper function parsePhoneNumbersFromHtml() used in BestPracticeAnalyzer + tests
09cc5fb
- summary consistency: forced dots at the end of each item in the summary list
4758e38
- crawler: more lenient parsing of title and meta tags .. e.g. even <title> can contain other HTML attributes
770b339
- options: default timeout increased from 3 to 5 seconds .. after testing on a lot of websites, it makes better sense
eb74207
- super table: added option to force non-breakable spaces in column cells
3500818
- best practice analyzer: added measurement of individual steps + added checking of active links with phone numbers <a href="tel: 123...">
1bb39e8
- accessibility analyzer: added measurement of individual steps + removed DOMDocument parsing after refactoring
2a7c49b
- analysis: added an option to measure the duration and number of analysis steps + the analyzeVisitedUrl() method now accepts a DOMDocument (for HTML) so the analyzers do not have to parse it twice
d8b9a3d
- super table: calculated auto-width can't be shorter than column name (label)
b97484f
- utils: removed ungreedy flag from all regular expressions, it caused problems under some circumstances
03fc202
- phpstan: fixed all level 5 issues
04c21aa
- phpstan: fixed all level 4 issues
91fee49
- phpstan: fixed all level 3 issues
2f7866a
- phpstan: fixed all level 2 issues
e438996
- phpstan: installed phpstan with level 2 for now
b896e6c
- tests: allowed nextjs.org for crawling (a couple of tests incorrectly failed because of this)
cdc7f56
- refactor: moved /Crawler/ into /src/Crawler/ + added file attachment support to mailer
2f0d26c
- sitemap exporter: renamed addErrorToSummary -> addCriticalToSummary
e46e192
- text output: added options --show-inline-criticals and --show-inline-warning, which display the found problems directly under the URL - the displayed table is less clear, but the problems are clearly visible
725b212
- composer.json: added require declarations for ext-dom, ext-libxml (used in analyzers) and ext-zlib (used in cache/storages)
3542cf0
- analysis: added accessibility and best practices analyzers with useful checks
860316f
- analysis: added AnalysisManager for better analysis control with the possibility to filter required analyzers using --analyzer-filter-regex
150569f
- result storage: options --result-storage, --result-storage-dir and --result-storage-compression for storing response bodies and headers (memory storage is used by default, but file storage is available for extremely large websites)
d2a8fab
- http cache: added --http-cache-dir and --http-cache-compression parameters (by default http cache is on and set to 'tmp/http-client-cache' and compression is disabled)
2eb9ed8
- super table: the currentOrderColumn is now optional - sometimes we want to leave the table sorted according to the input array
4fba880
- analysis: replaced severity ok/warning/error with ok/notice/warning/critical - it made more sense for analyzers
18dbaa7
- analysis: added support for immediate analysis of visited URLs with the possibility to insert the analyzer's own columns into the main table
004865f
- content types: fixed json/xml detection
00fc180
- content type analyzer: decreased URLs column size from 6 to 5 - that's enough
2eefbaf
- formatting: unification of duration formatting across the entire application
412ee7a
- super table: fixed sorting for array of arrays
4829be8
- source domains analyzer: minor formatting improvements
2d32ced
- offline website exporter: added info about successful export to summary
92e7e46
- help: added red message about invalid CLI parameters also to the end of help output, because help is already too long
6942e8f
- super table: added column property 'formatterWillChangeValueLength' to handle situation with the colored text and broken padding
7371a68
- analyzers: set a more meaningful analyzer order
5e8f747
- analyzers: added source domains analyzer with summary of domains and downloaded content types (number/size/duration)
f478f17
- super table: added auto-width column feature
d2c04de
- renaming: renamed '--max-workers' to '--workers' with the shortcut '-w=<num>' + added the shortcut '-rps=<num>' for '--max-reqs-per-sec=<num>'
218f8ff
- extra columns: added ability to force columns to the required length via "!" + refactoring using ExtraColumn
def82ff
- readme: divided the features into several groups and structured the documentation accordingly
c03d231
- offline exporter: export of the website to offline form has been fine-tuned (though not perfect yet), added --disable-* options to disable JS/CSS/images/fonts/etc. and a lot of other related functionality
0d04a98
- crawler: added possibility to set speed via --max-reqs-per-sec (default 10)
d57cc4a
- tests: divided the asserts for URL conversion testing into several detailed groups
f6221cb
- html url parser: added support for loading fonts from <link href='...'>
4c482d1
- manager: remove avif/webp support if OfflineWebsiteExporter is active - we want to use only long-supported jpg/png/gif on the local offline version
3ec81d3
- http response: transformation of a redirect into HTML that redirects via the <meta> tag (see the sketch below)
8f6ff16
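
A minimal sketch of such a redirect-to-HTML transformation (the markup details are illustrative):

```php
<?php
// Replace a 3xx response with a static page that redirects via
// <meta http-equiv="refresh">, which also works over file://.
function redirectToHtml(string $targetUrl): string
{
    $escaped = htmlspecialchars($targetUrl, ENT_QUOTES);
    return "<!DOCTYPE html>\n<html><head>"
        . "<meta http-equiv=\"refresh\" content=\"0; url={$escaped}\">"
        . "</head><body>Redirecting to <a href=\"{$escaped}\">{$escaped}</a></body></html>";
}
```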
- initiator: skip comments or empty arguments
12f4c52
- http client: added crawler signature to User-Agent and X-Crawler-Info header + added possibility to set Origin request header (otherwise some servers block downloading the fonts)
ae4eaf3
- visited url: added isStaticFile()
f1cd5e8
- crawler: increased pcre.backtrack_limit and pcre.recursion_limit (100x) to support longer HTML/CSS/JS (see the sketch below)
35a6e9a
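
For illustration, a 100x increase relative to PHP's stock defaults (1000000 and 100000); the crawler's exact figures may differ:

```php
<?php
// Raise the PCRE limits so preg_* calls on very long HTML/CSS/JS do not
// silently fail with PREG_BACKTRACK_LIMIT_ERROR (see preg_last_error()).
ini_set('pcre.backtrack_limit', (string)(100 * 1000000));
ini_set('pcre.recursion_limit', (string)(100 * 100000));
```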
- core options: renamed --headers-to-table to --extra-columns
7c30988
- crawler: added type for audio and xml + static cache for getContentTypeIdByContentTypeHeader
386599e
- found urls: normalization of URL takes care of spaces + change of source type to int
c3063a2
- debugging: possibility to enable debugging through ParsedUrl
979dc0e
- offline url converter: class for solving the translation of URL addresses to offline/local + tests
44118e6
- url converter: TargetDomainRelation enum with tests
fd6cf21
- initiator: check only script basename in unknown args check
888448f
- offline website export: to run the exporter, it is necessary to set --offline-export-directory
33e9f95
- offline website export: to run the exporter, it is necessary to set --offline-export-directory
bcc007b
- log & tmp: added .gitkeep for versioning of these folders - they are used by some optional features
065f8ef
- offline website export & tests: added the already well-functioning option to export the entire website to an offline form working from local static HTML files, including images, fonts, styles, scripts and other files (no documentation yet) + a lot of related changes in Crawler + added the first test covering some important relative-URL-building functionality
4633211
- composer & phpunit: added composer, phpunit and license CC BY 4.0
4979143
- visited-url: added info on whether the URL is external and whether it is allowed to be crawled
268a696
- text-output: added peak memory usage and average traffic bandwidth to total stats
cb68340
- crawler: added video support and fixed javascript detection by content-type
3c3eb96
- url parsers: extraction of url parsing from html/css into dedicated classes and FoundUrl with info about source tag/attribute
d87597d
- manager: ensure that done callback is executed only once
d99cccd
- http-client: extraction of http client functionality into dedicated classes and implemented cache for HTTP responses (critical for efficient development)
8439e37
- debugging: added debugging related expert options + Debugger class
2c89682
- parsed-url: added query, which is now needed
860df08
- status: trim only HTML bodies because trim breaks some types of binary files, e.g. avif
fca2156
- url parsers: unification of extension length in relevant regexes to {1,10}
96a3548
- basic-stats: fixed division by zero and nullable times
8c38b96
- fastest-analyzer: show only URLs with status 200 on the TOP list
0085dd1
- content-type-analyzer: added stats for 42x statuses (429 Too many requests)
4f49d12
- file export: fixed HTML report error after last refactoring
e77fa6c
- sitemap: publish only URLs with status 200 OK
b2d4448
- summary: added missing </ul> and renamed heading Stats to Summary in HTML report
c645e16
- status summary: added a summary showing important analyzed metrics with OK/WARNING/CRITICAL icons, ordered by severity, and INFO about export execution + interrupting the script with CTRL+C still runs all analyzers and exporters and displays statistics for the already processed URLs
fd643d0
- output consistency: ensuring color and formatting consistency of different types of values (status codes, request durations)
3ffe1d2
- analyzers: added content-type analyzer with stats for total/avg times, total sizes and statuses 200x, 300x, 400x, 500x
0475347
- crawler: better content-type handling for statistics and added 'Type' column to URL lists + refactored info from array to class
346caf4
- supertable: can now display data from an array-of-arrays as well as an array-of-objects + it can translate bash color declarations to HTML colors when rendering to HTML
80f0b1c
- analyzers: TOP slowest/fastest pages analyzer now evaluates only HTML pages, otherwise static content skews the results + decreased minTime for slowest analysis from 0.1 to 0.01 sec (on a very fast and cached website, the results were empty, which is not ideal)
1390bbc
- major refactoring: implementation of the Status class summarizing useful information for analyzers/exporters (replaces the JsonOutput over-use) + implementation of basic analyzers (404, redirects, slow/fast URLs) + SuperTable component that exports data to text and HTML + choice of memory-limit setting + change of some default values
efb9a60
- url parsing: fixes for cases when query params are used with htm/html/php/asp etc. + mini readme fix
af1acfa
- minor refactoring: renaming about core options, small non-functional changes
1dd258e
- major refactoring: better modularity and auto loading in the area of the exporters, analyzers, their configurability and help auto-building + new mailer options --mail-from-name and --mail-subject-template
0c57dbd
- json output: automatic shortening of the URL according to the console width, because when a long URL exceeds the window width, rewriting the progress-bar line stops working properly
106332b
- manual exit: capture CTRL+C and exit with statistics for at least the already processed URLs
7f4fc80
- error handling: show red error with help when queue or visited tables are full and info how to fix it
4efbd73
- DOM elements: implemented a DOM element counter - when you add 'DOM' to --headers-to-column, you will see the DOM element count
1837a9c
- sitemap and no-color: implemented xml/txt sitemap generator and --no-color option
f9ade44
- readme: added a table of contents and rewrote the intro, features and installation chapters
469fd1c
- readme: removed deprecated and duplicate mailer docs
c5effe8
- readme and CLI help: divided the parameters into clear groups and improved their descriptions - README.md has the detailed form, the CLI help a shorter version
19ff724
- include/ignore regex: added option to limit crawled URLs with the common combination of --include-regex and --ignore-regex
88e393d
- html report: masking passwords, styling, added logo, better info ordering and other small changes
4cdcdab
- mailer & exports: implemented ability to send HTML report to e-mail via SMTP + exports to HTML/JSON/TXT file + better reporting of HTTP error conditions (timeout, etc.) + requests for assets are sent only as HEAD without the need to download all binary data + updated documentation
a97c29d
- table output: option to set the expected column length for a better look, e.g. 'X-Cache(10)'
e44f89d
- output: renamed print*() methods to the more meaningful add*(), relevant also for JSON output
1069c4a
- options: default timeout decreased from 10 to 3, --table-url-column-size renamed to --url-column-size and decreased its default value from 100 to 80, new option --hide-progress-bar, changed --truncate-url-to-column-size to --do-not-truncate-url
e75038c
- readme: improved documentation describing use on Windows, macOS or arm64 Linux
baf2d05
- readme: added info that the crawler is really tested on Windows with Cygwin (Cygwin has some output limitations and it is not possible to achieve behavior as nice as on Linux)
1f195c0
- windows compatibility: ensured compatibility with running through Cygwin Swoole, which I recommend in the documentation for Windows users
c22cc45
- json output: implemented nice continuous progress reporting, intentionally on STDERR so the STDOUT output can be used to save the JSON to a file + improved README.md
c095249
- limits: increased the max queue length limit from 1000 to 2000 (this default is more suitable even for medium-sized websites)
c8c3312
- major refactoring: splitting the code into classes, improving error handling and implementing other functions (JSON output, assets crawling)
f6902fc
- readme: added information how to use crawler with Windows, macOS or arm64 architecture + a few other details
721f4bb
- url parsing: handled situations when relative or dotted URLs are used in HTML, e.g. href='sub/page', href='./sub/page', href='../sub/page', href='../../sub/page' etc. (see the sketch below) + a few minor optimizations
c2bbf72
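
A sketch of dotted-path resolution in this spirit (a simplification of RFC 3986 dot-segment removal; the function name is illustrative):

```php
<?php
// Resolve './' and '../' references against the directory of a base path.
function resolveRelativePath(string $basePath, string $relative): string
{
    // Drop the file part of the base, keep its directory, append the relative ref.
    $segments = explode('/', preg_replace('~/[^/]*$~', '/', $basePath) . $relative);
    $result = [];
    foreach ($segments as $segment) {
        if ($segment === '.' || $segment === '') {
            continue; // current dir and empty segments change nothing
        }
        if ($segment === '..') {
            array_pop($result); // step one directory up
        } else {
            $result[] = $segment;
        }
    }
    return '/' . implode('/', $result);
}

echo resolveRelativePath('/docs/guide/page', '../sub/page'), PHP_EOL; // /docs/sub/page
echo resolveRelativePath('/docs/guide/page', './sub/page'), PHP_EOL;  // /docs/guide/sub/page
```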
- memory allocation: added optional params --max-queue-length=<n> (default 1000), --max-visited-urls=<n> (default 5000) and --max-url-length=<u> (default 2000)
947a43f
- Initial commit with first version 2023.10.1
7109788