╔══════════════════════════════════════════════════════════════════════╗
║       AUDITMY.SITE — LARAVEL APPLICATION BUILD INSTRUCTIONS          ║
║            For use with Claude Code on cPanel terminal               ║
╚══════════════════════════════════════════════════════════════════════╝

=======================================================================
HOW TO USE THIS FILE
=======================================================================

1. Upload this file (instructions.txt) AND the JSX preview file
   (audit-preview.jsx) to your server — e.g. /home/yourusername/
2. Open your cPanel terminal
3. Run Claude Code with this prompt:

───────────────────────────────────────────────────────────────────────
PROMPT TO PASTE INTO CLAUDE CODE
───────────────────────────────────────────────────────────────────────

Read the instructions file at /home/auditmysitedevel/instructions.txt
and the JSX preview at /home/auditmysitedevel/audit-preview.jsx
carefully.

Build the complete Laravel 10 application for AuditMy.Site following
every specification in the instructions. The JSX file shows the exact
UI and data format the frontend expects — the backend MUST produce
JSON matching that structure.

The application should be created at /home/auditmysitedevel/auditmy-site/

Start by reading both files in full, then work through the
instructions section by section. Create every file listed. Test each
PHP file for syntax errors after creating it.

───────────────────────────────────────────────────────────────────────
END OF PROMPT
───────────────────────────────────────────────────────────────────────

=======================================================================
TABLE OF CONTENTS
=======================================================================

 1. PROJECT OVERVIEW
 2. TECH STACK & DEPENDENCIES
 3. ENVIRONMENT VARIABLES
 4. DATABASE SCHEMA (MIGRATION)
 5. FILE STRUCTURE
 6. DATA FORMAT SPECIFICATION (CRITICAL)
 7. BACKEND: PageFetcher
 8. BACKEND: OnPageAnalyser (SEO — with evidence)
 9. BACKEND: TechnicalAnalyser (SEO — with evidence)
10. BACKEND: ContentAnalyser (SEO — with evidence)
11. BACKEND: CroAiAnalyser (AI — desktop + mobile split)
12. BACKEND: SeoAuditor (orchestrator)
13. BACKEND: AuditController (API + Stripe + webhooks)
14. BACKEND: AdminController (dashboard)
15. FRONTEND: audit.blade.php (main tool)
16. FRONTEND: Admin views
17. ROUTES
18. CONFIG FILES
19. DEPLOYMENT NOTES

=======================================================================
1. PROJECT OVERVIEW
=======================================================================

AuditMy.Site is a paid SEO + CRO audit tool.

Flow:
1. User enters a URL
2. BOTH analyses run immediately:
   a. SEO audit runs instantly (code-based, ~2 seconds)
   b. CRO audit triggers in parallel (Claude API, ~10-20 seconds)
3. User sees FREE PARTIAL REPORT:
   - SEO: All 3 category scores + first 3 findings per category
     (title + badge only, no description/evidence/recommendation).
     Remaining findings shown blurred behind paywall.
   - CRO: Desktop + Mobile scores + first 1 finding per category
     (title + badge only). Remaining findings blurred.
   - Overall scores visible (SEO, CRO Desktop, CRO Mobile)
4. PAYWALL: "Unlock Full Report — £14.99"
   - User enters name + email → lead saved
   - User pays via Stripe Checkout
5. After payment → FULL REPORT unlocks:
   - ALL SEO findings with full descriptions, evidence, and
     recommendations (expandable/collapsible cards)
   - ALL CRO findings (desktop + mobile) with full descriptions,
     evidence, and recommendations
   - Priority Actions tab (top 10 fixes ranked by severity)

Two types of analysis:
- SEO (code-based): Instant, deterministic. Runs PHP code against the
  HTML DOM. 22+ checks across 3 categories. Partially shown for free,
  full details behind paywall.
- CRO (AI-based): Uses Claude Haiku 4.5 API. Two separate API calls —
  one for desktop CRO, one for mobile CRO. Each returns 4-5 categories
  with 3-5 findings per category. Partially shown for free, full
  details behind paywall.
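The "top 10 fixes ranked by severity" feeding the Priority Actions tab can be sketched as follows. This is JavaScript for illustration only: the severity weights and function shape are assumptions, and the real helper is the priorities() method on SeoAuditor described in section 12.

```javascript
// Hypothetical sketch: collect actionable findings across all categories,
// rank fail above warn, and keep the top 10 for the Priority Actions tab.
// The numeric weights are illustrative assumptions, not part of the spec.
const SEVERITY = { fail: 2, warn: 1 };

function priorities(categories) {
  return categories
    .flatMap(cat => cat.findings.map(f => ({ category: cat.title, ...f })))
    .filter(f => f.s === 'fail' || f.s === 'warn') // only actionable findings
    .sort((a, b) => SEVERITY[b.s] - SEVERITY[a.s]) // fail before warn (stable sort keeps page order within a tier)
    .slice(0, 10);                                 // top 10 fixes
}
```

Tagging each finding with its category title lets the paid report say where each priority fix came from.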
IMPORTANT — The CRO analysis runs for FREE on every audit. This is a
deliberate choice: showing the user their CRO scores and a taste of
the AI findings makes the paywall far more compelling than only
showing SEO data. The API cost per audit is ~£0.03 (Haiku 4.5), so a
single £14.99 sale covers the API cost of roughly 500 audits
(£14.99 ÷ £0.03), leaving a wide margin even at a 5% conversion rate.

Branding: Powered by Visionsharp (configurable in .env)

=======================================================================
2. TECH STACK & DEPENDENCIES
=======================================================================

composer.json:

{
  "name": "visionsharp/auditmy-site",
  "type": "project",
  "description": "SEO + AI-powered CRO audit tool",
  "require": {
    "php": "^8.1",
    "laravel/framework": "^10.0",
    "guzzlehttp/guzzle": "^7.8",
    "stripe/stripe-php": "^13.0",
    "ext-dom": "*",
    "ext-mbstring": "*",
    "ext-curl": "*"
  },
  "require-dev": {
    "fakerphp/faker": "^1.23",
    "phpunit/phpunit": "^10.1"
  },
  "autoload": {
    "psr-4": { "App\\": "app/" }
  },
  "minimum-stability": "stable",
  "prefer-stable": true
}

=======================================================================
3.
ENVIRONMENT VARIABLES
=======================================================================

Create .env.example with these variables:

APP_NAME=AuditMySite
APP_ENV=production
APP_KEY=
APP_DEBUG=false
APP_URL=https://auditmy.site

DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=auditmy_site
DB_USERNAME=root
DB_PASSWORD=

# Stripe
STRIPE_KEY=pk_live_xxx
STRIPE_SECRET=sk_live_xxx
STRIPE_WEBHOOK_SECRET=whsec_xxx

# Anthropic (Claude API for CRO)
ANTHROPIC_API_KEY=sk-ant-xxx
ANTHROPIC_MODEL=claude-haiku-4-5-20251001

# Audit config
AUDIT_PRICE=14.99
AUDIT_CURRENCY=gbp
AUDIT_TIMEOUT=15
AUDIT_USER_AGENT=AuditMySite/1.0

# Branding
BRAND_NAME=Visionsharp
BRAND_URL=https://www.visionsharp.co.uk
CONTACT_URL=https://www.visionsharp.co.uk/about-us/contact-us/

# Queue (for async CRO)
QUEUE_CONNECTION=database

=======================================================================
4. DATABASE SCHEMA
=======================================================================

Create migration: database/migrations/2026_02_25_000001_create_tables.php

Table: audits
- id: uuid, primary key
- url: string(500)
- host: string(255), indexed
- seo_score: integer, default 0
- cro_score: integer, default 0
- cro_desktop_score: integer, default 0  ← NEW
- cro_mobile_score: integer, default 0   ← NEW
- overall_score: integer, default 0
- seo_result_json: longText, nullable
- cro_result_json: longText, nullable (stores BOTH desktop+mobile)
- full_result_json: longText, nullable
- cro_status: enum(pending,processing,complete,failed), default pending
- ip_address: string(45), nullable
- user_agent: string(500), nullable
- referrer: string(500), nullable
- timestamps
- index on created_at

Table: leads
- id: uuid, primary key
- name: string
- email: string, indexed
- company: string, nullable
- phone: string(50), nullable
- audit_id: uuid, nullable, indexed
- url: string(500), nullable
- score: integer, nullable
- source: string, default 'audit'
- timestamps

Table: payments
- id: uuid, primary key
- audit_id: uuid, indexed
- lead_id: uuid, nullable, indexed
- stripe_session_id: string, nullable, unique
- stripe_payment_intent: string, nullable
- stripe_customer_id: string, nullable
- amount: integer (in pence)
- currency: string(3), default 'gbp'
- status: enum(pending,processing,paid,failed,refunded), default pending
- email: string, nullable
- stripe_metadata: json, nullable
- timestamps
- index on status, created_at

Table: admin_users
- id: auto-increment
- name: string
- email: string, unique
- password: string
- remember_token
- timestamps

Table: webhook_logs
- id: auto-increment
- event_type: string
- stripe_event_id: string, nullable
- payload: json, nullable
- status: enum(processed,failed,ignored), default processed
- error: text, nullable
- timestamps

=======================================================================
5. FILE STRUCTURE
=======================================================================

auditmy-site/
├── app/
│   ├── Http/
│   │   └── Controllers/
│   │       ├── AuditController.php
│   │       └── AdminController.php
│   └── Services/
│       ├── PageFetcher.php
│       ├── SeoAuditor.php
│       └── Analysers/
│           ├── OnPageAnalyser.php
│           ├── TechnicalAnalyser.php
│           ├── ContentAnalyser.php
│           └── CroAiAnalyser.php
├── config/
│   ├── audit.php
│   └── services.php
├── database/
│   └── migrations/
│       └── 2026_02_25_000001_create_tables.php
├── resources/
│   └── views/
│       ├── audit.blade.php
│       └── admin/
│           ├── layout.blade.php
│           ├── dashboard.blade.php
│           ├── audits.blade.php
│           ├── audit-detail.blade.php
│           ├── leads.blade.php
│           ├── payments.blade.php
│           ├── webhooks.blade.php
│           └── pagination.blade.php
├── routes/
│   └── web.php
├── composer.json
├── .env.example
└── README.md

=======================================================================
6. DATA FORMAT SPECIFICATION (CRITICAL — READ CAREFULLY)
=======================================================================

This is the EXACT JSON format the frontend expects.
The backend MUST produce data matching this structure. Look at the JSX preview file for the full example data. ─── SEO Finding Format ─── Every SEO finding (from OnPageAnalyser, TechnicalAnalyser, ContentAnalyser) MUST use this format: { "s": "pass|warn|fail|info", "t": "Short title with specific data, e.g. 'Title tag: \"My Title\" (42 chars)'", "d": "2-3 sentence description explaining why this matters", "r": "Specific actionable recommendation (null for pass findings)", "evidence": [ {"label": "Current title", "value": "\"My Title Here\"", "color": "#F5C542"}, {"label": "Length", "value": "42 characters", "color": "#22C97A"}, ... ], "evidenceLabel": "Title tag details" } CRITICAL: The "evidence" array is what makes this tool premium. Every finding MUST include an evidence array showing the actual data found on the page. This is what the user is paying for. Examples of evidence by check type: Title tag: - Current title text (exact) - Character count - Optimal range Meta description: - Current description text (exact, or "Not found") - Character count Headings: - Every H1 tag with its full text - Every H2 tag with its full text - Every H3 tag with its full text - Count of each level Images: - Total count - Count with alt / missing alt / empty alt - List of specific image src paths that are missing alt - For images WITH alt, show the alt text too if there are few Links: - Count of internal (unique) vs total - Count of external - List of unique internal link destinations - List of external link destinations Open Graph: - Status of each OG tag (og:title, og:description, og:image, og:url, og:type) — show actual value or "MISSING" Canonical: - The actual canonical URL found SSL: - Protocol (HTTP vs HTTPS) - Certificate status Response time: - Actual TTFB in milliseconds - Page size in bytes/KB - Redirect count Structured data: - JSON-LD found: yes/no (if yes, show the @type values) - Microdata found: yes/no - RDFa found: yes/no Robots.txt: - Status code - Key directives 
found (Disallow, Sitemap) Sitemap: - Location URL - Number of URLs listed Word count: - Exact word count - Paragraph count And so on for every check... The key principle: if you found specific data on the page, SHOW IT in the evidence array. Never just say "2 H1 tags found" — show the actual text of both H1 tags. ─── Evidence "color" Values ─── Use these hex colours in evidence items to indicate status: Pass/good: "#22C97A" Warning: "#F5C542" Fail/bad: "#EF5350" Neutral/info: "#7B8CA2" (or omit color for default white) ─── SEO Category Format ─── { "id": "onpage", "title": "On-Page SEO", "accent": "#00E09E", "findings": [ ...finding objects... ], "score": 42 } Categories: onpage → accent "#00E09E" → title "On-Page SEO" technical → accent "#6C8EEF" → title "Technical SEO" content → accent "#A78BFA" → title "Content Quality" ─── Full SEO API Response (POST /api/audit) ─── { "success": true, "audit_id": "uuid", "url": "https://example.com", "final_url": "https://www.example.com", "host": "www.example.com", "seo_score": 47, "seo_categories": [ ...category objects... ], "seo_priorities": [ ...priority objects... ], "meta": { "status_code": 200, "response_time": 1847, "page_size": 287400, "is_https": true, "redirect_count": 1, "audited_at": "2026-02-25T14:30:00+00:00" } } ─── CRO Finding Format ─── CRO findings come from the Claude API. The prompt instructs Claude to return findings in this SAME format: { "s": "warn", "t": "Short specific title referencing this site", "d": "2-3 sentence analysis specific to this page, citing research", "r": "Specific actionable recommendation with examples", "evidence": [ {"label": "Hero CTA", "value": "\"Contact Us\"", "color": "#F5C542"}, {"label": "Footer CTA", "value": "\"Send Message\"", "color": "#F5C542"} ], "evidenceLabel": "All CTA buttons on page" } IMPORTANT: The Claude API prompt MUST instruct Claude to include evidence arrays in its findings. 
This is what makes the CRO analysis premium — showing the actual button copy, form fields, heading text etc. that Claude found on the page. ─── CRO Response Format (stored in cro_result_json) ─── { "cro_score": 38, "cro_desktop_score": 42, "cro_mobile_score": 31, "desktop": { "categories": [ { "id": "cta", "title": "Calls to Action", "accent": "#F59E0B", "score": 35, "findings": [ ...findings with evidence... ] }, ...4 more categories ] }, "mobile": { "categories": [ { "id": "cta_mob", "title": "Mobile CTAs & Touch Targets", "accent": "#F59E0B", "score": 22, "findings": [ ...findings with evidence... ] }, ...3-4 more categories ] } } ─── CRO Status Endpoint (GET /api/audit/{id}/cro-status) ─── Returns scores only (not full findings): { "status": "complete", "cro_score": 38, "cro_desktop_score": 42, "cro_mobile_score": 31 } Or while processing: { "status": "processing" } ─── FREE vs PAID Data (GET /api/audit/{id}) ─── This endpoint checks if a paid payment exists for the audit. When NOT PAID (is_paid: false), findings are stripped down: { "is_paid": false, "seo_score": 47, "cro_score": 38, "cro_desktop_score": 42, "cro_mobile_score": 31, "overall_score": 43, "host": "example.com", "meta": { ...full tech meta... }, "seo_categories": [ { "id": "onpage", "title": "On-Page SEO", "score": 42, "accent": "#00E09E", "total_findings": 9, "findings": [ {"s": "warn", "t": "Title tag: too short (20 chars)"}, {"s": "fail", "t": "Meta description: Missing"}, {"s": "warn", "t": "Headings: 2 H1 tags found"} ] }, ... ], "cro_desktop_categories": [ { "id": "cta", "title": "Calls to Action", "score": 35, "accent": "#F59E0B", "total_findings": 4, "findings": [ {"s": "warn", "t": "Hero CTA says 'Contact Us'"} ] }, ... ], "cro_mobile_categories": [ ...same pattern... ] } Note: each category includes "total_findings" (the real count) so the frontend can display "X more findings hidden". The findings array only contains {s, t} — no d, r, evidence, or evidenceLabel. 
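The free-tier stripping rule just described (keep the real total_findings count, truncate each finding to {s, t}) can be sketched in plain JavaScript; the helper name is hypothetical and the real implementation lives in PHP in AuditController:

```javascript
// Hypothetical helper: reduce a full category to its free-tier teaser form.
// Scores, titles, and accents stay visible; the real finding count is exposed
// so the UI can render "X more findings hidden"; each surviving finding keeps
// only {s, t} — no d, r, evidence, or evidenceLabel.
function stripCategoryForFreeTier(category, limit) {
  return {
    id: category.id,
    title: category.title,
    score: category.score,
    accent: category.accent,
    total_findings: category.findings.length, // real count for the teaser
    findings: category.findings
      .slice(0, limit)                        // SEO: limit 3, CRO: limit 1
      .map(({ s, t }) => ({ s, t })),         // drop the paid-only fields
  };
}
```

Whitelisting {s, t} (rather than deleting the paid fields) guarantees nothing paid-only can leak if a new field is added to findings later.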
SEO shows first 3 per category, CRO shows first 1 per category. When PAID (is_paid: true), return EVERYTHING: { "is_paid": true, "seo_score": 47, "cro_score": 38, ...all scores... "seo_categories": [ { ...full categories with ALL findings including s, t, d, r, evidence, evidenceLabel... } ], "cro_desktop_categories": [ ...full... ], "cro_mobile_categories": [ ...full... ], "seo_priorities": [ ...top 10 SEO fixes... ], "cro_priorities": [ ...top 10 CRO fixes... ] } ======================================================================= 7. BACKEND: PageFetcher (app/Services/PageFetcher.php) ======================================================================= Purpose: Fetches a URL and returns HTML + metadata. Uses Guzzle. Key features: - Custom user agent from config('audit.user_agent') - Timeout from config('audit.timeout', 15) - Follow redirects, track redirect count - Record response time (TTFB) - Record final URL after redirects - Record body size in bytes - Record is_https boolean - Record status_code - Separate methods: fetch($url), fetchRobotsTxt($url), fetchSitemap($url) Return format from fetch(): { "success": true, "body": "...", "final_url": "https://www.example.com", "status_code": 200, "response_time": 1847, // milliseconds "body_size": 287400, // bytes "is_https": true, "redirect_count": 1 } ======================================================================= 8. BACKEND: OnPageAnalyser (app/Services/Analysers/OnPageAnalyser.php) ======================================================================= Constructor: receives $html (string), $url (string) Method: analyse() returns array of findings Checks to implement (each returns ONE finding with evidence): 1. TITLE TAG - Extract tag text and length - Evidence: actual title text, character count, optimal range - Fail if missing, warn if <20 or >60 chars, pass otherwise 2. 
META DESCRIPTION - Extract <meta name="description"> content - Evidence: actual description text (or "Not found"), char count - Fail if missing, warn if <70 or >160 chars, pass otherwise 3. HEADINGS (H1-H6) - Count every heading level - Extract the FULL TEXT of every H1, H2, H3 tag - Evidence: every heading with its level and full text e.g. {label:"H1 #1", value:'"Welcome to JB Plumbing"'} {label:"H1 #2", value:'"Our Services"', color:"#EF5350"} {label:"H2 tags", value:"0 found", color:"#EF5350"} {label:"H3 #1", value:'"Boiler Repairs"'} - Fail if 0 H1, warn if >1 H1 (show all H1 text), pass if exactly 1 with good structure 4. IMAGES + ALT TEXT - Count total, with-alt, missing-alt, empty-alt - Evidence: counts AND list of specific images missing alt with their src paths. Also list images with empty alt="" Show first 10-15 images max to avoid huge evidence arrays. - Fail if <60% coverage, warn if <90%, pass if >=90% 5. INTERNAL + EXTERNAL LINKS - Count unique internal links (same host), external links - Also count total links (including nav/footer duplicates) - Evidence: unique internal link destinations (href), external link destinations (href+text), total vs unique count Show first 10-15 links max. - Warn if 0 external or very few unique internal, pass otherwise 6. OPEN GRAPH TAGS - Check for og:title, og:description, og:image, og:url, og:type - Evidence: each OG tag with actual value or "MISSING" - Fail if <3/5 present, warn if <5/5, pass if all present 7. CANONICAL TAG - Extract <link rel="canonical"> href - Evidence: actual canonical URL (or "Not found") - Fail if missing, warn if doesn't match current URL, pass if self-referencing 8. META ROBOTS - Check for <meta name="robots"> content - Evidence: actual content value (or "Not set — defaults to index, follow") - Fail if noindex, warn if nofollow, pass/info otherwise 9. 
LANGUAGE ATTRIBUTE - Check <html lang="..."> - Evidence: lang value (or "Not set") - Warn if missing, pass if present ======================================================================= 9. BACKEND: TechnicalAnalyser (app/Services/Analysers/TechnicalAnalyser.php) ======================================================================= Constructor: receives $html, $fetchResult (array from PageFetcher), $robotsTxt (string|null), $sitemap (string|null) Method: analyse() returns array of findings Checks: 1. SSL / HTTPS - Check if final URL uses https - Evidence: Protocol, Certificate status - Fail if HTTP, pass if HTTPS 2. RESPONSE TIME (TTFB) - Use response_time from fetch result - Evidence: TTFB in ms, recommended threshold, page size, redirect count - Fail if >2000ms, warn if >800ms, pass if <800ms 3. REDIRECT CHAIN - Use redirect_count from fetch result - Evidence: each hop in the redirect chain if possible (at minimum: original URL → final URL) - Warn if >0 redirects, pass if 0 4. ROBOTS.TXT - Check if robots.txt was fetched successfully - Evidence: status, key directives found (Disallow lines, Sitemap declarations) - Fail if missing/404, pass if found 5. XML SITEMAP - Check if sitemap was found (direct or via robots.txt) - Evidence: location URL, approximate URL count if parseable - Fail if missing, pass if found 6. STRUCTURED DATA - Search HTML for JSON-LD (<script type="application/ld+json">), microdata (itemscope/itemtype), RDFa (typeof/property) - If JSON-LD found, extract @type values - Evidence: JSON-LD found/not + @type values, Microdata found/not, RDFa found/not - Fail if none found, pass if any found 7. VIEWPORT META - Check for <meta name="viewport"> - Evidence: actual content value - Fail if missing, pass if present 8. PAGE SIZE - Use body_size from fetch result - Evidence: size in bytes and KB, recommended range - Warn if >500KB HTML, pass if under 9. 
CHARACTER ENCODING - Check for <meta charset="..."> or Content-Type header - Evidence: charset value found - Warn if missing, pass if UTF-8 ======================================================================= 10. BACKEND: ContentAnalyser (app/Services/Analysers/ContentAnalyser.php) ======================================================================= Constructor: receives $html (string) Method: analyse() returns array of findings Checks: 1. WORD COUNT - Strip tags, count words in body content - Evidence: word count, paragraph count, recommended range - Fail if <100 words, warn if <300, pass if >=300 2. READABILITY - Check paragraph lengths, presence of sub-headings - Evidence: average paragraph length, longest paragraph, sub-heading frequency - Warn if paragraphs are very long or no sub-headings 3. HEADING STRUCTURE / HIERARCHY - Check if headings follow logical order (H1→H2→H3) - Evidence: heading hierarchy list showing any skipped levels - Warn if levels are skipped (e.g. H1 then H4) 4. INTERNAL LINK DENSITY - Ratio of links to word count - Evidence: links count, word count, ratio - Info/warn if very low or very high 5. CONTENT FRESHNESS SIGNALS - Look for dates, "updated", "published" timestamps - Evidence: any dates found, or "No date signals found" - Info finding 6. FAQ CONTENT - Look for FAQ sections, <details>/<summary>, Q&A patterns - Evidence: "FAQ section found" or "No FAQ content detected" - Warn if not found (suggest adding FAQ + schema) ======================================================================= 11. BACKEND: CroAiAnalyser (app/Services/Analysers/CroAiAnalyser.php) ======================================================================= This is the most critical file. It calls the Claude Haiku 4.5 API TWICE: once for desktop CRO analysis, once for mobile CRO analysis. Total cost per audit: ~$0.04 (£0.03). 
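Because the analyser depends on Claude returning strict JSON, the response must be cleaned (strip markdown fences) and validated before use, as the API Calling subsection later in this section specifies. A minimal sketch in JavaScript for brevity; the PHP service would do the equivalent with json_decode:

```javascript
// Sketch: clean and validate a Claude response that should be pure JSON.
// Despite the "no markdown, no backticks" prompt rule, models sometimes
// wrap output in fences, so strip them defensively before parsing.
function parseCroResponse(text) {
  const cleaned = text
    .trim()
    .replace(/^```(?:json)?\s*/i, '') // leading fence, optional "json" tag
    .replace(/```\s*$/, '')           // trailing fence
    .trim();
  const data = JSON.parse(cleaned);   // throws on invalid JSON; caller logs and returns null
  if (!Array.isArray(data.categories)) {
    throw new Error('Unexpected CRO response shape: missing "categories" array');
  }
  return data;
}
```

Validating the shape here (not just the JSON syntax) lets analyse() return null on a malformed response instead of storing a broken cro_result_json.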
Constructor: receives $html (string), $url (string) ─── Content Extraction ─── Method: extractContent() returns associative array Extract from the DOM: - host: domain name - url: full URL - headings: array of "H1: actual text", "H2: actual text" etc. - ctas: array of button/link text that look like CTAs (buttons, links with class containing btn/button/cta, input[type=submit]) - forms: array of form objects, each containing: - action: form action URL - method: GET/POST - fields: array of {name, type, placeholder, required} - submit_text: text of the submit button - trust_signals: array of trust-related text found (look for: review, testimonial, guarantee, certified, rated, award, trust, secure, verified, accredited, years, experience, star, rating — extract surrounding context) - images: array of first 20 image src+alt pairs - meta_description: the meta description text - body_text: first 4000 chars of body text (strip tags, compress whitespace) - phone_numbers: any tel: links or phone patterns found - nav_items: text of navigation links ─── Desktop CRO Prompt ─── Build a prompt that includes the extracted content as JSON and instructs Claude to analyse for DESKTOP conversion optimisation. CRITICAL: The prompt MUST instruct Claude to return findings with "evidence" arrays. Here is the exact prompt template: ---BEGIN DESKTOP PROMPT--- You are a senior Conversion Rate Optimisation consultant performing a DESKTOP analysis of {host}. You have extracted the following data from the page. ## Extracted Page Data {JSON of extracted content} ## Instructions Analyse this page for DESKTOP conversion optimisation across these categories. Return your findings as JSON. CRITICAL REQUIREMENTS FOR EVERY FINDING: 1. Each finding MUST include an "evidence" array showing the actual data you found on the page (button text, heading text, form fields, etc.) 2. Each finding MUST include an "evidenceLabel" string (e.g. "All CTA buttons on page", "Form fields", "Hero copy") 3. 
Evidence items use format: {"label": "...", "value": "...", "color": "#hex"} where color is: - "#22C97A" for good/pass items - "#F5C542" for warnings - "#EF5350" for critical/fail items - "#7B8CA2" for neutral info Categories to analyse: 1. "cta" — "Calls to Action" Analyse: button copy, CTA placement, visibility, specificity, phone number accessibility, urgency/benefit language. Evidence to include: all CTA button text found, phone number location and format, whether phone is clickable (tel: link). 2. "trust" — "Trust & Social Proof" Analyse: testimonials, reviews, certifications, case studies, before/after, social proof, guarantees, credentials. Evidence to include: trust signals found or not found, badge images, review widgets present/absent. 3. "forms" — "Lead Capture & Forms" Analyse: number of form fields, field types, submit button copy, validation, form placement, friction. Evidence to include: every form field with its type/name/ placeholder, submit button text. 4. "messaging" — "Messaging & Value Proposition" Analyse: headline copy, sub-headlines, benefit vs feature language, pricing transparency, unique selling points. Evidence to include: actual headline text, sub-heading text, key copy phrases. 5. "ux" — "Page Experience & Conversion Flow" Analyse: above-the-fold content, sticky elements, scroll depth to CTA, visual hierarchy, stock vs real imagery. Evidence to include: what's visible above fold, distance to first CTA, navigation structure. 
Rules: - Each category: 3-5 findings - Status: "pass", "info", "warn", "fail" - Reference the actual domain "{host}" throughout - Quote actual text you see in the extracted data - Cite CRO research (CXL, Baymard, Nielsen Norman, HubSpot) - warn/fail findings MUST have "r" (recommendation) - pass findings: r = null - EVERY finding must have evidence array and evidenceLabel - Return ONLY valid JSON, no markdown, no backticks, no preamble JSON structure: { "categories": [ { "id": "cta", "title": "Calls to Action", "findings": [ { "s": "warn", "t": "Short title about specific issue", "d": "Description citing specific content and research", "r": "Specific recommendation with examples", "evidence": [ {"label": "...", "value": "...", "color": "..."} ], "evidenceLabel": "..." } ] } ] } ---END DESKTOP PROMPT--- ─── Mobile CRO Prompt ─── Similar to desktop but focused on MOBILE-SPECIFIC issues. ---BEGIN MOBILE PROMPT--- You are a senior CRO consultant performing a MOBILE-SPECIFIC analysis of {host}. The same page data is below but you must evaluate it from a mobile user's perspective. ## Extracted Page Data {JSON of extracted content} ## Instructions Analyse this page for MOBILE conversion optimisation. Focus on mobile-specific issues that don't apply to desktop. Categories to analyse: 1. "cta_mob" — "Mobile CTAs & Touch Targets" Focus: tap-to-call phone links (tel:), touch target sizes (minimum 48x48px), button spacing, floating/sticky CTAs on mobile, click-to-call prominence. 2. "trust_mob" — "Mobile Trust Signals" Focus: trust badge sizes on small screens, social proof visibility in first mobile viewport, review widgets on mobile, above-fold trust indicators. 3. "forms_mob" — "Mobile Form Experience" Focus: input types (type="tel" for phone, type="email"), form layout (single vs multi-column on mobile), keyboard optimisation, autocomplete attributes, submit button visibility when keyboard is open. 4. 
"ux_mob" — "Mobile Page Experience" Focus: estimated load time on 3G, image lazy loading, text size readability (minimum 16px), navigation style (hamburger vs full), horizontal scroll issues, viewport configuration. Same rules as desktop: 3-5 findings per category, evidence arrays required, cite research, reference {host}, quote actual content. Return ONLY valid JSON. JSON structure: { "categories": [ { "id": "cta_mob", "title": "Mobile CTAs & Touch Targets", "findings": [...] } ] } ---END MOBILE PROMPT--- ─── API Calling ─── Make TWO separate API calls to the Anthropic messages endpoint: POST https://api.anthropic.com/v1/messages Headers: Content-Type: application/json x-api-key: {from config} anthropic-version: 2023-06-01 Body: model: config('services.anthropic.model') max_tokens: 4000 messages: [{role: "user", content: $prompt}] Parse the response: $body['content'][0]['text'] Clean JSON (strip ```json``` fences if present) json_decode and validate structure ─── Scoring ─── Score each category based on findings: pass = 100, info = 70, warn = 40, fail = 0 Category score = average of finding scores (rounded int) Add accent colours: cta/cta_mob → #F59E0B trust/trust_mob → #34D399 forms/forms_mob → #818CF8 messaging → #FB923C ux/ux_mob → #F87171 ─── Return Format ─── Method: analyse() returns: { "cro_score": 38, // average of desktop + mobile "cro_desktop_score": 42, // average of desktop categories "cro_mobile_score": 31, // average of mobile categories "desktop": { "categories": [ ...scored desktop categories... ] }, "mobile": { "categories": [ ...scored mobile categories... ] } } Return null on failure (log the error). ======================================================================= 12. BACKEND: SeoAuditor (app/Services/SeoAuditor.php) ======================================================================= Orchestrator class. Two main methods: 1. 
auditSeo($url) → runs instant SEO checks
   - Uses PageFetcher to fetch URL, robots.txt, sitemap
   - Runs OnPageAnalyser, TechnicalAnalyser, ContentAnalyser
   - Builds categories array with scores
   - Returns JSON matching the format in section 6
   - ALSO returns 'html' key (raw HTML) for CRO to use later

2. auditCro($html, $url) → runs AI CRO analysis (Haiku 4.5)
   - Creates CroAiAnalyser
   - Calls analyse() which makes 2 API calls (desktop + mobile)
   - Returns the CRO result object (desktop + mobile + scores)
   - Called separately via the /trigger-cro endpoint so the SEO
     response can return instantly without waiting for the AI

Helper methods:
- score($findings) → calculates score from findings array
- priorities($categories) → extracts top 10 actionable findings
  sorted by severity

=======================================================================
13. BACKEND: AuditController (app/Http/Controllers/AuditController.php)
=======================================================================

Endpoints:

POST /api/audit
- Validates: url (required, string, max 500)
- Runs SeoAuditor::auditSeo() → instant SEO results
- Generates UUID for audit_id
- Stores in audits table (seo_result_json, cro_status='processing')
- IMMEDIATELY triggers CRO analysis in the background:
  - Set cro_status = 'processing'
  - Call SeoAuditor::auditCro($html, $url)
  - On success: store cro_result_json, set scores, status=complete
  - On failure: set cro_status='failed'

NOTE: Since this is a synchronous PHP request on shared hosting (no
queue workers on basic cPanel), the CRO analysis will run AFTER the
SEO response is returned to the frontend. There are two approaches —
choose the simplest for your hosting:

APPROACH A (Simple — recommended for cPanel):
Return SEO results immediately to the frontend. The frontend then
calls GET /api/audit/{id}/trigger-cro, which runs the CRO analysis
synchronously (takes 10-20 seconds) and returns the results. The
frontend shows a loading spinner during this call.
APPROACH B (If queue workers are available): Dispatch a queued job that runs the CRO analysis. Frontend polls GET /api/audit/{id}/cro-status every 3 seconds. USE APPROACH A. Add this endpoint: GET /api/audit/{id}/trigger-cro - Fetch the audit from DB - If cro_status is already 'complete', return the CRO results - If cro_status is 'processing' or 'pending': - Re-fetch HTML via PageFetcher - Run SeoAuditor::auditCro($html, $url) - Store results (cro_result_json, scores, status) - Return the CRO results - Frontend calls this after showing SEO teaser Response format: { "status": "complete", "cro_score": 38, "cro_desktop_score": 42, "cro_mobile_score": 31, "desktop": { "categories": [...] }, "mobile": { "categories": [...] } } Return SEO data to frontend with structure: { "success": true, "audit_id": "uuid", "url": "...", "host": "...", "seo_score": 47, "seo_categories": [...], "seo_priorities": [...], "meta": { status_code, response_time, page_size, ... } } POST /api/lead - Validates: name, email (required), company, phone, audit_id, url, score (optional) - Stores in leads table - Returns success + lead_id POST /api/checkout - Validates: audit_id (required), email, name (optional) - If STRIPE_SECRET not configured → DEMO MODE: - Mark audit as paid (add 'paid' boolean or check payment exists) - Return {demo: true} - If Stripe configured: - Create Stripe Checkout Session - mode: payment - line_items: [{price_data: {currency, unit_amount in pence, product_data: {name, description}}, quantity: 1}] - success_url: APP_URL/?paid=1&audit_id={id}&session_id= {CHECKOUT_SESSION_ID} - cancel_url: APP_URL/?cancelled=1 - customer_email: $email - metadata: {audit_id: $id} - Store payment record (status: pending) - Return {checkout_url: $session->url} GET /api/verify-payment - Query params: session_id, audit_id - Retrieve Stripe session - If payment_status === 'paid': - Update payment record to paid - Return {success: true, paid: true} - If not paid: return {success: false} GET 
/api/audit/{id}
- Fetch audit from DB
- Check if a PAID payment exists for this audit_id
- If PAID:
  - Return FULL audit data (all SEO + CRO findings with
    descriptions, evidence, recommendations)
- If NOT PAID:
  - Return PARTIAL data:
    - seo_score, cro_score, cro_desktop_score, cro_mobile_score,
      overall_score (all scores visible for free)
    - seo_categories: include all categories with scores, but each
      category's findings array is LIMITED:
      → First 3 findings: include only {s, t} (status + title).
        Do NOT include d, r, evidence, evidenceLabel
      → Remaining findings: omitted entirely
    - cro data (if available): same approach:
      → Desktop categories: first 1 finding per category, only {s, t}
      → Mobile categories: first 1 finding per category, only {s, t}
    - meta: full tech meta (this is free to show)
    - Add flag: "is_paid": false
- This ensures the free teaser shows enough to hook the user
  (scores, category breakdown, finding titles) but hides all the
  value (descriptions, evidence, recommendations)

GET /api/audit/{id}/cro-status
- Fetch audit cro_status
- If complete: return status + cro_score + cro_desktop_score +
  cro_mobile_score (scores only, not full findings — those come via
  GET /api/audit/{id} with payment check)
- If processing/pending: return just {status: "processing"}
- If failed: return {status: "failed"}

POST /webhook/stripe
- Exclude from CSRF middleware
- Verify Stripe signature using webhook secret
- Handle event: checkout.session.completed
  - Find payment by stripe_session_id, mark as paid
- Log webhook to webhook_logs table
- Return 200

=======================================================================
14.
BACKEND: AdminController (app/Http/Controllers/AdminController.php)
=======================================================================

Dashboard (GET /admin):
- Total revenue (sum of paid payments)
- Revenue today / this week / this month
- Total audits / leads / paid reports
- Conversion rate (paid / total audits × 100)
- Average SEO score, CRO desktop score, CRO mobile score
- CRO status breakdown (complete/pending/failed counts)
- Last 10 payments, audits, leads

Audits list (GET /admin/audits):
- Paginated, 25 per page
- Search by URL/host
- Filter by cro_status
- Show: url, seo_score, cro_desktop_score, cro_mobile_score,
  overall_score, cro_status badge, created_at

Audit detail (GET /admin/audits/{id}):
- Full SEO results with all findings and evidence
- Full CRO results (desktop + mobile) with all findings
- Related lead and payment info

Leads (GET /admin/leads):
- Paginated, 25 per page
- Search by name/email/company/url
- CSV export (GET /admin/leads/export)
- Delete (DELETE /admin/leads/{id})

Payments (GET /admin/payments):
- Paginated, filter by status
- Show: email, amount, status, stripe_session_id, domain, date

Webhooks (GET /admin/webhooks):
- Paginated
- Show: event_type, status, error, timestamp

=======================================================================
15. FRONTEND: audit.blade.php
=======================================================================

This is the main public-facing page. Build it to match the JSX
preview file (audit-preview.jsx) as closely as possible. It should be
a SINGLE Blade file containing all HTML, CSS, and JavaScript inline.
No build tools, no npm, no Vite. Everything self-contained.

Sections / stages (show/hide with JavaScript):

1. INPUT STAGE
- Brand pills: "SEO Audit" + "AI CRO Analysis"
- Headline: "Find out why your site isn't converting"
- Subhead explaining the tool
- URL input + "Audit My Site" button
- Enter key triggers scan

2.
SCANNING STAGE
- Spinning border animation
- "Analysing example.com"
- Status messages that update:
  → "Running SEO checks..."
  → "SEO complete. Running AI conversion analysis..."
  → "Analysing desktop experience..."
  → "Analysing mobile experience..."
  → "Building your report..."

3. TEASER / PARTIAL REPORT STAGE (shows free preview + paywall)

This is the key conversion stage. The user sees REAL data from their
site — enough to prove the tool works and create urgency, but not
enough to act on without paying.

Layout:

a) HEADER — Score overview (all scores visible for free)
   - Overall score ring (large)
   - Three score boxes: SEO (47), CRO Desktop (42), CRO Mobile (31)
   - Tech meta bar: Status code, TTFB, Page size, HTTPS, Redirects
   - Host name and URL

b) SEO SECTION — Partial findings
   - Section header: "SEO Analysis" with score
   - Category cards in grid (all 3 categories with scores)
   - For each category: show first 3 findings
     → Show ONLY the badge (pass/warn/fail) and title text
     → Do NOT show description, evidence, or recommendation
     → These appear as compact single-line rows
   - After the visible findings: BLURRED SECTION
     → Show 4-6 more finding rows with CSS blur(4px) and a gradient
       overlay fading to the background colour
     → These should look like real findings but be unreadable
   - At the bottom of the SEO section: small text "X more findings
     hidden — unlock full report to see all descriptions, evidence,
     and recommendations"

c) CRO SECTION — Partial findings (if CRO is complete)
   - Section header: "AI Conversion Analysis" with score
   - Desktop / Mobile toggle buttons with scores
   - Category cards in grid (all categories with scores visible)
   - For each category: show first 1 finding
     → Badge + title only (same as SEO preview)
   - Blurred section with remaining findings
   - Note: if CRO is still loading, show a spinner with "AI analysis
     in progress... this takes 10-20 seconds" and poll
     /api/audit/{id}/cro-status every 3 seconds.
When complete, call /api/audit/{id} to get partial data and render
     the CRO teaser.

d) PAYWALL CARD (overlapping the blurred content)
   - Positioned to overlap the blur transition (negative margin)
   - Glowing border (brand green shadow)
   - "Unlock Your Full Report"
   - Subtitle: "See all X SEO findings and Y CRO findings with
     detailed descriptions, evidence from your site, and specific
     recommendations to fix every issue."
   - Feature list (3-4 items):
     → "Detailed evidence — see exact H tags, image paths, form
       fields found on your site"
     → "Specific recommendations — actionable fixes with example
       code and copy"
     → "Priority action plan — top 10 fixes ranked by impact"
     → "Desktop + Mobile CRO — separate AI analysis for each"
   - Price block: "£14.99" large + "one-time · instant access"
   - Name + email inputs (side by side)
   - "Pay £14.99 · Unlock Full Report" button (Stripe purple)
   - "Secure Stripe payment · No subscription" note
   - "Want us to fix these issues? Free consultation →" link

4. FULL REPORT STAGE (after payment verified)
- Payment confirmed banner (green)
- Call GET /api/audit/{id} (now returns full data since paid)
- Overall scores: SEO + CRO Desktop + CRO Mobile
- Tech meta bar
- Section toggle: "SEO Analysis" / "CRO Analysis"
- CRO sub-toggle: "🖥 Desktop (42)" / "📱 Mobile (31)"
- Tabs: "Overview" / "All Findings" / "Priority Actions"
- Overview: score ring + category grid cards
- All Findings: FULL collapsible cards with:
  → Badge + title (always visible)
  → Click to expand: description, evidence section, recommendation
  → Evidence sections are collapsible sub-sections ("View evidence
    (N)" button → expands to show evidence list)
- Priority Actions: numbered list of top 10 fixes sorted by severity
  (fail first, then warn)
- CTA: "Want help fixing these issues?"
→ contact link

JavaScript functions needed:
- go(stage) — show/hide stages
- scan() — POST /api/audit, then immediately call triggerCro()
- triggerCro() — GET /api/audit/{id}/trigger-cro (or poll
  /cro-status if using the queue approach)
- renderTeaser(seoData, croData) — build partial report + paywall
- checkout() — POST /api/lead then POST /api/checkout
- loadFullReport() — GET /api/audit/{id} (after payment)
- renderFullReport(data) — build complete report with all findings
- Handle Stripe return (?paid=1&session_id=&audit_id=)
  → call GET /api/verify-payment first
  → then call GET /api/audit/{id} to get full data
  → render full report

IMPORTANT FLOW DETAIL:
After scan() returns SEO data:
1. Immediately show the SEO teaser (scores + partial findings)
2. In parallel, call triggerCro() to start CRO analysis
3. Show "AI analysis in progress..." spinner in CRO section
4. When triggerCro() returns, update the teaser with CRO scores and
   partial CRO findings

This means the user sees SEO results INSTANTLY and CRO results fill
in 10-20 seconds later. The page doesn't block.

COLOUR PALETTE (use CSS variables):
--bg: #06090E (page background)
--sf: #0D1117 (surface/cards)
--sf2: #131921 (secondary surface)
--cd: #171D28 (card background)
--bd: #1C2535 (borders)
--bl: #263042 (lighter borders)
--tx: #E2E8F2 (text)
--tm: #7B8CA2 (muted text)
--td: #4A5771 (dim text)
--br: #00E09E (brand green)
--brd: #0A3D2C (brand dark)
--brg: rgba(0,224,158,.08) (brand glow)
--ac: #6C8EEF (accent blue)
--r: #EF5350 (red/fail)
--y: #F5C542 (yellow/warn)
--g: #22C97A (green/pass)
--cro: #F59E0B (CRO amber)
--st: #635BFF (Stripe purple)

FONTS: Plus Jakarta Sans (headings/body) + JetBrains Mono
(scores/code). Load from the Google Fonts CDN.

CRITICAL: The evidence sections in findings should be COLLAPSIBLE.
Default: collapsed. Click "View evidence (N)" to expand. This keeps
the report clean but lets users drill into specifics.
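On the backend side of this teaser, the unpaid-report trimming that GET /api/audit/{id} performs (section 13) could be sketched as below. `trimForTeaser` is a hypothetical helper name, not something the spec defines; the {s, t} keys and the per-category limits are the ones named above.

```php
<?php
// Hypothetical sketch of the unpaid-teaser trimming: keep only the
// first $limit findings per category, and strip each finding down to
// {s, t} (status + title); d, r, evidence and evidenceLabel are
// dropped so the free preview carries no actionable detail.

function trimForTeaser(array $categories, int $limit): array
{
    return array_map(function (array $cat) use ($limit) {
        $cat['findings'] = array_map(
            fn (array $f) => ['s' => $f['s'], 't' => $f['t']],
            array_slice($cat['findings'], 0, $limit)
        );
        return $cat;
    }, $categories);
}
```

Per the rules in section 13, the controller would call this with a limit of 3 for SEO categories and 1 for each CRO category, while leaving category names and scores untouched.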
Study the JSX file carefully for the exact layout, spacing,
border-radius values, and component structure. The Blade file should
produce the same visual result.

=======================================================================
16. FRONTEND: Admin Views
=======================================================================

All admin views use a shared layout (layout.blade.php) with:
- Dark theme matching the main site
- Sidebar navigation: Dashboard, Audits, Leads, Payments, Webhooks
- Responsive (collapse sidebar on mobile)

Dashboard: Revenue stats, conversion metrics, recent activity
Audits list: Table with scores, status badges, search
Audit detail: Full findings viewer with evidence
Leads: Table with search, CSV export button, delete
Payments: Table with status filter
Webhooks: Table with status and errors

Use the same colour palette as the main site.

=======================================================================
17. ROUTES (routes/web.php)
=======================================================================

// Public
Route::get('/', fn() => view('audit'));

// API
Route::prefix('api')->group(function () {
    Route::post('/audit', [AuditController::class, 'runAudit']);
    Route::get('/audit/{id}/trigger-cro', [AuditController::class, 'triggerCro']);
    Route::get('/audit/{id}/cro-status', [AuditController::class, 'croStatus']);
    Route::post('/lead', [AuditController::class, 'saveLead']);
    Route::post('/checkout', [AuditController::class, 'createCheckout']);
    Route::get('/verify-payment', [AuditController::class, 'verifyPayment']);
    Route::get('/audit/{id}', [AuditController::class, 'getAudit']);
});

// Stripe webhook (no CSRF)
Route::post('/webhook/stripe', [AuditController::class, 'stripeWebhook'])
    ->withoutMiddleware([\Illuminate\Foundation\Http\Middleware\VerifyCsrfToken::class]);

// Admin (add auth middleware in production)
Route::prefix('admin')->group(function () {
    Route::get('/', [AdminController::class, 'dashboard']);
    Route::get('/audits',
    [AdminController::class, 'audits']);
    Route::get('/audits/{id}', [AdminController::class, 'viewAudit']);
    Route::get('/leads', [AdminController::class, 'leads']);
    Route::get('/leads/export', [AdminController::class, 'exportLeads']);
    Route::delete('/leads/{id}', [AdminController::class, 'deleteLead']);
    Route::get('/payments', [AdminController::class, 'payments']);
    Route::get('/webhooks', [AdminController::class, 'webhooks']);
});

=======================================================================
18. CONFIG FILES
=======================================================================

config/audit.php:

return [
    'price' => env('AUDIT_PRICE', '14.99'),
    'currency' => env('AUDIT_CURRENCY', 'gbp'),
    'timeout' => env('AUDIT_TIMEOUT', 15),
    'user_agent' => env('AUDIT_USER_AGENT', 'AuditMySite/1.0'),
    'brand_name' => env('BRAND_NAME', 'Visionsharp'),
    'brand_url' => env('BRAND_URL', 'https://www.visionsharp.co.uk'),
    'contact_url' => env('CONTACT_URL', ''),
];

config/services.php (add to existing):

'stripe' => [
    'key' => env('STRIPE_KEY'),
    'secret' => env('STRIPE_SECRET'),
    'webhook_secret' => env('STRIPE_WEBHOOK_SECRET'),
],
'anthropic' => [
    'api_key' => env('ANTHROPIC_API_KEY'),
    'model' => env('ANTHROPIC_MODEL', 'claude-haiku-4-5-20251001'),
],

=======================================================================
19. DEPLOYMENT NOTES
=======================================================================

After building all files:
1. Run: composer install
2. Run: cp .env.example .env
3. Run: php artisan key:generate
4. Edit .env with real database credentials
5. Run: php artisan migrate
6. Test: php artisan serve
7. Visit http://localhost:8000 — enter a URL and verify the SEO scan works
8. Configure Stripe keys in .env for payment testing
9. Configure the Anthropic API key for CRO testing
10.
Set up the Stripe webhook pointing to https://yourdomain.com/webhook/stripe

For cPanel:
- Point the domain's document root to /public
- Ensure PHP 8.1+ is selected
- Ensure ext-dom, ext-mbstring, ext-curl are enabled
- If using the async CRO approach (Approach B), set up a cron job to
  process the queue. Note that schedule:run only fires scheduled
  tasks, not queued jobs, so run the queue worker directly:
  * * * * * cd /path/to/auditmy-site && php artisan queue:work --stop-when-empty

=======================================================================
END OF INSTRUCTIONS
=======================================================================