Minor interface changes for user-friendliness: in the Links results, 'By Link' is now 'Link URLs' and 'Flat view' is 'All Links'.
Adds Headings table to SEO results (headings are still available in the main view).
Adds ability to see context for warnings. Sometimes it can be difficult to find the problem in a page, even given a line number. A double-click on a warning in the warnings table will now open an inspector, which will usually show a clip from the page source in the area of the problem.
Fixes a problem where multiple instances of the same page may have appeared in the 'Appears on' list in the link inspector if the anchor feature was turned on.
Minor correction to one of the warnings ('p within heading'): the warning said that p can only contain inline content (which is true), but in this case it should have said that heading tags can only contain inline content.
Fixes issue where multiple head sections would prevent proper parsing of some of the information in the head and could lead to incorrect warnings of missing title or missing description. NB Integrity and Scrutiny support pages with multiple head sections (with a warning), with no head tags, or with no head or body tags.
If the image url is empty, the alt text warning now says "empty" for the image url rather than being blank.
Fixes issue with the link inspector not visiting / highlighting / locating the selected page.
Important update for users of 10.4.1 and 10.4.2.
Minor tweaks to the header fields that Scrutiny sends with every server request.
Adds itms: and itms-apps: (direct links to an App Store app) to the types of link that Scrutiny doesn't attempt to check.
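A minimal Swift sketch of scheme-based skipping, for illustration only: the function name and the skip list are hypothetical rather than Scrutiny's internals (only the addition of itms: and itms-apps: comes from the note above).

```swift
import Foundation

// Hypothetical skip list: this release adds itms: and itms-apps:;
// the other entries are just plausible examples.
let uncheckableSchemes: Set<String> = ["mailto", "tel", "itms", "itms-apps"]

func shouldAttemptCheck(_ link: String) -> Bool {
    // Relative links have no scheme; resolve them elsewhere and treat them as checkable here.
    guard let scheme = URL(string: link)?.scheme?.lowercased() else { return true }
    return !uncheckableSchemes.contains(scheme)
}

print(shouldAttemptCheck("itms-apps://itunes.apple.com/app/id123456789"))  // false
print(shouldAttemptCheck("https://peacockmedia.software/"))                // true
```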
Fixes a bug that could cause inconsistent results if a scan is run, and then run again without quitting the app or switching website configs.
The licence key wasn't being shown in the About box; it is now.
The 'handle cookies' setting is now on by default. (Previously, bad urls were retried and cookie handling was always on for the retry, regardless of whether the cookie setting was switched on in the website profile. Historic reasons for the old behaviour no longer apply.)
For each request, the request header field Cache-Control is now set to no-cache (rather than max-age=0), which may be the better way to force a fresh version of the page.
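For illustration, a minimal URLRequest sketch of the Cache-Control change described above; this is an assumption about how such a header might be set in Swift, not Scrutiny's actual networking code.

```swift
import Foundation

// Ask caches for a fresh copy of the page rather than a possibly stale one.
var request = URLRequest(url: URL(string: "https://peacockmedia.software/")!)
request.setValue("no-cache", forHTTPHeaderField: "Cache-Control")  // previously "max-age=0"
```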
When Page analysis is selected from the Tools menu, it's pre-filled with the starting url of the currently selected website (the one in focus if more than one Scrutiny window is open).
Adds option to generate and submit Bing Webmaster XML batch files.
IndexNow is intended to allow you to quickly inform multiple search engines of changes, and for those changes to be reflected quickly in search results.
Below the option to export an XML sitemap, you can now export 'Bing Webmaster XML'.
The existing preference to save locally / save and submit / just submit now applies to the Bing Webmaster XML as well as the sitemap XML.
You will need to fill in your Bing API key in Preferences > Sitemap.
If you have more than 500 pages, they will be sent in batches of 500 as required.
It's up to you not to exceed your daily quota (which is currently 10,000 urls per day).
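The 500-url batching mentioned above is simple to sketch. This is purely illustrative (the helper and the example data are hypothetical, not Scrutiny's code), and it leaves the daily quota to the user:

```swift
import Foundation

// Split a list of page urls into groups of at most 500 per submission.
func batches(of urls: [String], size: Int = 500) -> [[String]] {
    stride(from: 0, to: urls.count, by: size).map { start in
        Array(urls[start..<min(start + size, urls.count)])
    }
}

let pages = (1...1234).map { "https://example.com/page\($0)" }
print(batches(of: pages).map { $0.count })  // [500, 500, 234], i.e. three submissions
```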
A couple of fixes to the blacklisting of directories when using orphan check / ftp.
Fixes a couple of problems when using 'do not follow urls that do not contain' rules for whitelisting parts of a site.
If more than one such rule was being used, they wouldn't have played together.
If a term contained a trailing slash, such as '/mac/scrutiny/', a url such as peacockmedia.software/mac/scrutiny would have failed to match because of the trailing slash and a strict string match.
Now these situations are taken into account and such a match works as you would reasonably expect.
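A hedged sketch of the trailing-slash part of that fix; the function names and the 'match any term' reading of multiple rules are my assumptions, not Scrutiny's internals:

```swift
import Foundation

// Treat '/mac/scrutiny/' and '/mac/scrutiny' as equivalent whitelist terms.
func urlMatches(_ url: String, term: String) -> Bool {
    let trimmed = term.hasSuffix("/") ? String(term.dropLast()) : term
    return url.contains(term) || url.contains(trimmed)
}

// One plausible reading of rules "playing together": follow a url if it matches any term.
func shouldFollow(_ url: String, whitelistTerms: [String]) -> Bool {
    whitelistTerms.isEmpty || whitelistTerms.contains { urlMatches(url, term: $0) }
}

print(urlMatches("https://peacockmedia.software/mac/scrutiny", term: "/mac/scrutiny/"))  // true
```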
Improvements to the parsing of image srcsets.
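For context, a srcset is a comma-separated list of candidates, each a url optionally followed by a width (480w) or density (2x) descriptor. The sketch below is deliberately simplified (it ignores edge cases such as commas inside data: urls) and is not Scrutiny's parser:

```swift
import Foundation

// Pull the candidate urls out of a srcset attribute value.
func imageURLs(fromSrcset srcset: String) -> [String] {
    srcset.split(separator: ",").compactMap { candidate -> String? in
        let trimmed = candidate.trimmingCharacters(in: .whitespaces)
        return trimmed.split(separator: " ").first.map { String($0) }
    }
}

print(imageURLs(fromSrcset: "img-480.jpg 480w, img-800.jpg 800w, img.jpg"))
// ["img-480.jpg", "img-800.jpg", "img.jpg"]
```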
It's been possible to make a list of urls to different domains in order to scan multiple sites in one scan / one set of results.
Now the 'down but not up' rule is applied to urls in that list, so it's possible to selectively crawl sections of a single site.
(It is also possible to do this by setting up 'whitelist' rules, but this relies on there being links on your starting url to the areas that you want to scan.)
Note that when using the list of deep links, the trailing slash is important.
A url such as peacockmedia.software/mac/scrutiny will be assumed to be a page called scrutiny and the crawl will be limited to /mac/, but a url such as peacockmedia.software/mac/scrutiny/ is assumed to be a directory and the scan will be limited to /scrutiny/.
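A minimal Swift sketch of that trailing-slash convention, assuming the crawl scope is simply the url's directory; this illustrates the rule as described rather than Scrutiny's implementation:

```swift
import Foundation

// A url ending in "/" is treated as the directory to crawl;
// otherwise the last component is assumed to be a page and its parent directory is used.
func crawlScope(for url: URL) -> URL {
    url.hasDirectoryPath ? url : url.deletingLastPathComponent()
}

print(crawlScope(for: URL(string: "https://peacockmedia.software/mac/scrutiny/")!))
// https://peacockmedia.software/mac/scrutiny/  (scan limited to /scrutiny/)
print(crawlScope(for: URL(string: "https://peacockmedia.software/mac/scrutiny")!))
// https://peacockmedia.software/mac/           (scan limited to /mac/)
```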