JOININLIVE.AI · SAJITH · 07.05.2026
Engineering Review

JoinInLive — Live Event Bug Report

Code-level root-cause analysis after the 7 May 2026 broadcast
(Bayern vs PSG · UEFA Champions League · ESPN feed via FeedSync).

Branch: develop · Date: 07 May 2026 · Mode: Read-only review
01

Executive summary

4 · Critical defects: data loss · broken send flow · stale state
7 · AI generation: context · repetition · sponsor coverage
4 · Sponsor integration: render · isolation · visuals · sizing
2 · Live operations: clock drift · end-of-game refresh
7 · UX & polish: lower-priority cosmetic / placement fixes
9 · External / scope: feature work, not coding bugs
Single biggest finding

One root cause — fingerprintValue() referenced but never defined in AdminDashboard.tsx — silently breaks every counter, every panel update, and every state refresh that flows through the dashboard's polling layer. We believe this single defect explains a disproportionate number of "stale UI" symptoms during the broadcast.

Two architectural gaps
  • The live webhook ingest path (FeedIngestProcessor) sends only game / previous_type / message to the LLM, not the rich SPORT_CONTEXT / MATCH_TITLE / BUFF_LANGUAGE / FEED_EVENT_TEXT / CHAT_SECTION payload that the mock-feed agent uses. The model is generating buffs while half-blind to which match it is watching.
  • There is no awareness of the live match clock for external feeds. The "current minute" is computed from wall-clock multiplied by an acceleration factor designed for replaying recorded matches — explaining the ~25 minute drift and the impossible card-time options (76/86/96).
02

Critical defects

Critical

2.1 · Frontend dashboard counters and panels do not refresh after first load

Reported as: #20 viewer/response counts not updating · #21 end-of-game buff did not show · partial cause of stale-state symptoms across the dashboard.

Root cause

frontend/client/src/admin/AdminDashboard.tsx:328 and :335 call fingerprintValue(value) — a function that is not defined and not imported anywhere in the codebase. A grep across the full frontend tree returns zero definitions. The call is wrapped in smartSet, which is itself called inside Promise.allSettled callbacks, so the resulting ReferenceError is swallowed silently.

Net effect: every state setter that goes through smartSet (setDrafts, setScheduled, setProposals, setAnalytics, setFeedRunning, setFeedMinute, setEndedAt, setVenueLat, setTimelineBuffs) never runs after page load. The 4-second polling loop fires correctly and the API responses are correct, but the React state never updates.

Why this matters

It explains why viewer count and response count froze, why the "Ready to Send" panel never reflected new drafts, and likely contributed to the end-of-game buff not appearing.

Fix scope

Define the missing helper. The comment on lines 325–326 reads: "For arrays, uses length + first/last ID as a fast fingerprint instead of full JSON.stringify" — implying a simple length + first.id + last.id for arrays and JSON.stringify for primitives. One small function.
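The comment describes the intended logic concretely enough to sketch. Below is a minimal sketch of that fingerprint, written in Python for readability (the real helper would be TypeScript in AdminDashboard.tsx, and the name fingerprint_value / the id-based array shape are assumptions drawn from the comment, not from existing code):

```python
import json

def fingerprint_value(value):
    """Fast change-detection fingerprint.

    Arrays: length plus first/last id, avoiding a full JSON.stringify.
    Everything else: stable JSON serialisation.
    """
    if isinstance(value, list):
        if not value:
            return "0::"
        def _id(item):
            # Fall back to the item itself if it has no "id" field.
            return item.get("id") if isinstance(item, dict) else item
        return f"{len(value)}:{_id(value[0])}:{_id(value[-1])}"
    return json.dumps(value, sort_keys=True, default=str)
```

Because smartSet only compares fingerprints to skip redundant renders, any collision-tolerant cheap hash like this is sufficient.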

Critical

2.2 · Editing an AI proposal silently loses the edits on approve

Reported as: #22 "Editing an AI proposal then clicking Approve & Add to Deck does not retain the edits."

Root cause — two defects compounding

  1. Frontend (AdminDashboard.tsx:1889). When the user opens EditCardModal and clicks "Approve & Add to Deck", the handler calls handleApprove(editCard.proposalId) directly. It does not call handleEditSave() first, so the edited content is never persisted to the proposal row before approval is requested.
  2. Backend (backend/app/api/viewer_api_routes.py:3823–3864, the POST /api/events/{eventId}/feed/proposals/{proposalId}/approve route). The route does not accept any request body. It reads the proposal row from the database and copies proposed_content.content and proposed_content.display straight into the new draft buff. There is no path for edits to flow through.

Net effect: the user sees their edits in the modal, clicks Approve, the draft is created from the original LLM output, and the edits are lost without warning.

Fix scope

Either make the modal save edits before approving (single-line change, trivial), or have the approve endpoint accept an editedContent payload. Both are fine.

Critical

2.3 · Send button locked: "Cannot send: A buff is already live"

Reported as: #18 — admins could not press Send (Mike and Tom both affected).

Root cause

In AdminDashboard.tsx:588–605, handleSend() blocks send when activeBuff && !activeBuff.result. The check itself is correct. The problem is that activeBuff is passed as a prop from the parent, and the parent's view of the active buff is updated via the same smartSet / fingerprintValue polling layer that is currently broken (see 2.1).

When a buff was closed on the server, the dashboard never received the state update saying it was no longer active, so the send guard kept firing forever.

Fix scope

This is downstream of 2.1 — fix that and this should resolve. If it persists, have handleSend re-fetch the event's active buff state on demand instead of trusting the prop.

Critical

2.4 · Live match clock drifted ~25 minutes ahead of real time

Reported as: #19 "Game timeline approximately 25 minutes ahead of real time."

Root cause

backend/app/feed/agent.py:328–329 and backend/app/feed/mock_feed.py:91–97. The "current match minute" is computed as:

elapsed_minutes = (wall_clock_elapsed_ms / 60000) * speed_multiplier

For mock feeds this is correct — the multiplier compresses a recorded 90-minute match into 15 minutes of replay. For external/ESPN feeds, this same formula runs unchanged, with no mechanism to read the actual match minute from the incoming commentary timestamp. There is a datetime_utc field on every FeedIngestItem (backend/app/models/feed_ingest_item.py), but FeedIngestProcessor never uses it to derive a true match clock.

Fix scope

For external-feed events, derive current_minute from (latest_feed_item.datetime_utc - event.actual_start) instead of the simulated clock. ~10–20 lines.
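A minimal sketch of that derivation, assuming both timestamps are timezone-aware datetimes as the model fields suggest (derive_live_minute is a hypothetical helper name, not existing code):

```python
from datetime import datetime

def derive_live_minute(latest_item_utc: datetime, actual_start_utc: datetime) -> int:
    """Real match minute from the newest feed item's timestamp,
    replacing the accelerated wall-clock formula for external feeds.

    Caveat: does not yet account for the half-time break or stoppage
    time; a production version would subtract the break window.
    """
    elapsed_s = (latest_item_utc - actual_start_utc).total_seconds()
    return max(0, int(elapsed_s // 60))
```

With this in place, the speed_multiplier path stays exclusive to mock-feed replays.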

03

AI buff generation defects

High

3.1 · Cricket-style buffs in a football match · "Manchester" hallucinations

Reported as: #1 cricket buffs during Bayern vs PSG · #2 Manchester references throughout · #4 factually wrong content (e.g. "scoring from a penalty" when it was a free-kick).

Root cause

backend/app/services/feed_ingest_processor.py:117–126.

For external feeds posting via webhook (FeedSync → JIL), the prompt sent to the LLM is the generic prompt at backend/app/prompts/feed_buff_gen/system_prompt.txt, populated with only three placeholders: GAME, PREVIOUS_TYPE, MESSAGE. There is no team list, no match title, no competition name, no score, no language token, no recent chat context.

The richer prompt at backend/app/prompts/feed_buff_gen/sports/system_propmt.txt — the one that includes SPORT_CONTEXT, MATCH_TITLE_SUFFIX, SUGGESTED_BUFF_TYPE, BINARY_OPTION_COUNT, BUFF_LANGUAGE, FEED_EVENT_TEXT, CHAT_SECTION — is reachable only from backend/app/feed/agent.py (lines 726, 766), which runs only for the bundled mock feeds when an admin clicks "Start Agent" in the UI.

A grep across the entire backend confirms load_sports_system_prompt_template and build_sports_feed_agent_user_prompt are called from exactly one file: agent.py. No webhook path touches them.

Compounding this: feed_ingest_processor.py:123 falls back to event.sport or "cricket". If event.sport is empty or unmapped, the prompt literally tells the LLM GAME: cricket on a football match. The generic system prompt also carries an embedded cricket example ("Virat Kohli scores the first boundary") that further primes the model.

With no anchoring facts and a cricket example in the prompt, the model fills the gap from its training distribution. "Manchester" is the most globally recognizable football brand and features heavily in UEFA fixtures during Champions League weeks — exactly the kind of plausible guess an under-specified prompt produces. The 75% confidence the model returned is itself a tell — the model is signalling uncertainty.

Fix scope

Either point FeedIngestProcessor._build_proposal_payload at load_sports_system_prompt_template and build_sports_feed_agent_user_prompt, populating SPORT_CONTEXT / MATCH_TITLE / language from the existing Event row; or extend the simple build_user_prompt to include team and competition fields. The data is already in the DB (event.name, event.teams, event.sport, event.venue_name, event.feed_id) — the ingest processor just doesn't read it.
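A sketch of what populating those placeholders from the existing Event row could look like. Placeholder names follow the sports prompt described above; the dict-shaped event and the language field are stand-ins for the real ORM model, and build_sport_context is a hypothetical helper:

```python
def build_sport_context(event: dict) -> dict:
    """Populate the rich-prompt placeholders from fields the ingest
    processor already has on the Event row, so the LLM is no longer
    half-blind to which match it is watching."""
    sport = event.get("sport") or "unknown"   # never default to "cricket"
    teams = " vs ".join(event.get("teams") or [])
    return {
        "GAME": sport,
        "SPORT_CONTEXT": f"{sport}: {teams} at {event.get('venue_name', '')}".strip(),
        "MATCH_TITLE": event.get("name", ""),
        "BUFF_LANGUAGE": event.get("language") or "en",
    }
```

Note the fallback: an unmapped sport becomes "unknown" rather than "cricket", so at worst the model hedges instead of being actively primed with the wrong sport.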

Medium-High

3.2 · AI proposals are repetitive

Reported as: #3 "Most AI suggestions had very similar / repetitive content."

Root cause

agent.py and the prompts. The agent does not maintain any topic or question memory across proposals. It does pass PREVIOUS_TYPE (the buff type of the last proposal) into the user prompt, but it does not pass the previous question text, previous topic, or a summary of the last N proposals. The system prompt also contains no instruction to avoid repeating themes.

If the LLM landed on "Will X score next?" three minutes ago, nothing prevents it from generating that same question again.

Fix scope

Pass a short rolling history of the last 3–5 proposal titles into the user prompt and add an explicit "do not repeat themes from RECENT_BUFFS" instruction to the system prompt. Cheap fix with high return.
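The rolling history could be as small as this sketch, assuming a per-agent deque of recent titles (RECENT_BUFFS is the suggested placeholder name; the window size and sample titles are illustrative):

```python
from collections import deque

RECENT_LIMIT = 5  # assumed window; tune against observed repetition

def render_recent_buffs(titles) -> str:
    """Render the last few proposal titles as a RECENT_BUFFS prompt
    section the system prompt can tell the model not to repeat."""
    if not titles:
        return "RECENT_BUFFS: none yet"
    bullets = "\n".join(f"- {t}" for t in titles)
    return "RECENT_BUFFS (do not repeat these themes):\n" + bullets

# Kept per agent run; deque(maxlen=...) drops the oldest automatically.
history: deque = deque(maxlen=RECENT_LIMIT)
```

Appending each accepted proposal's title to history and injecting render_recent_buffs(history) into the user prompt is the whole change.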

Medium

3.3 · Card-timing question with logically impossible options (76 / 86 / 96)

Reported as: #5.

Root cause

Same as 3.1 — the LLM has no idea what minute the match is currently at. With the rich prompt path it would know via SPORT_CONTEXT / MATCH_TITLE plus the live feed text, but on the webhook path it has neither the match clock nor the feed event timestamp. So when asked to invent options around a card event, the model picks numerically plausible values that happen to be in the future.

Fix scope

Same fix as 3.1. As a defensive layer, add a post-generation validator that drops proposals whose options reference minutes after the current minute when generated against a "past event" type (card, goal, foul).
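The defensive validator is small; a sketch, assuming minute options arrive as strings and the past-event taxonomy named above (minute_options_valid is a hypothetical name):

```python
PAST_EVENT_TYPES = {"card", "goal", "foul"}  # events that have already happened

def minute_options_valid(event_type: str, options: list[str],
                         current_minute: int) -> bool:
    """Reject proposals whose minute options lie in the future when the
    question is about an event that has already occurred."""
    if event_type not in PAST_EVENT_TYPES:
        return True
    return all(int(o) <= current_minute for o in options if o.isdigit())
```

Run against the reported bug, a card question at minute 63 offering 76/86/96 is dropped before it reaches the dashboard.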

Medium

3.4 · First-half buffs arriving in the second half

Reported as: #6.

Root cause

Combination of 2.4 (clock drift), 3.1 (no match-state context in prompt), and 3.6 (event bunching). The agent does not know which half the match is in; it just reacts to whatever feed event last arrived. If a feed event lagged or the cooldown bunched events together, a half-time-themed buff could land minutes after the second half kicked off.

Fix scope

Together with the prompt-context fix (3.1), include a MATCH_PHASE field (first_half, half_time, second_half, extra_time) derivable from the match clock, and instruct the LLM to bias buffs toward the current phase.

Medium-High

3.5 · 1–2 minute delay between feed event and buff arrival

Reported as: #7.

Root cause — three layers add up

  1. Polling interval: agent.py:370 sleeps 5 seconds between ticks.
  2. Cooldown — default cooldown_seconds=60 (agent.py:271). After a successful proposal, the next one is blocked for 60s.
  3. LLM latency — typically 3–10 seconds depending on provider/model.

Worst case: a feed event arrives → up to 5s until the next tick → up to 60s of cooldown → ~10s of LLM latency, so roughly 75 seconds before a buff appears. There is no expedited path for high-priority events (goals, red cards), and the cooldown is global, not per-priority.

Fix scope

(a) Drop cooldown to 20–30s, or (b) make cooldown event-priority-aware so goals and red cards bypass it, or (c) start LLM call optimistically during cooldown for high-priority events. Polling can also tighten to 2s without backend pressure.
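Option (b) is a few lines; a sketch, assuming an event-kind taxonomy like the one below (the set members and helper name are illustrative, not from the codebase):

```python
HIGH_PRIORITY_EVENTS = {"goal", "red_card", "penalty"}  # assumed taxonomy

def can_generate(event_kind: str, seconds_since_last: float,
                 cooldown_s: float = 30.0) -> bool:
    """Priority-aware cooldown check: goals and red cards bypass the
    window entirely; everything else waits out the shortened cooldown."""
    if event_kind in HIGH_PRIORITY_EVENTS:
        return True
    return seconds_since_last >= cooldown_s
```

This also combines naturally with option (a): the default here is already the suggested 30s rather than the current 60s.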

Medium

3.6 · Bunching: long silence then 4 buffs at once

Reported as: #8.

Root cause

agent.py:475–536. During the cooldown window, feed events keep arriving and are persisted to the DB, but proposal generation is suppressed. Because LLM calls are async and the cooldown resets only on a successful proposal (line 516), failures leave the window open, and multiple 5-second ticks can each kick off a generation before any of them completes. Once the LLM starts succeeding again after a hiccup, those in-flight ticks all land their proposals nearly simultaneously, and the agent then sits idle until the next batch: the observed silence-then-burst pattern.

Fix scope

Add a small jitter to the cooldown reset, and gate concurrent proposal generation per-agent (only one in-flight at a time).
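Both fixes fit in one small guard object; a sketch, with hypothetical names and illustrative jitter bounds:

```python
import random

class ProposalGate:
    """Serialises proposal generation per agent: at most one in-flight
    LLM call, plus a jittered cooldown so queued ticks don't all fire
    the moment the window reopens."""

    def __init__(self, base_cooldown_s: float = 30.0, jitter_s: float = 5.0):
        self.base_cooldown_s = base_cooldown_s
        self.jitter_s = jitter_s
        self.in_flight = False

    def try_acquire(self) -> bool:
        """Claim the single generation slot; False if one is running."""
        if self.in_flight:
            return False
        self.in_flight = True
        return True

    def release(self) -> float:
        """Free the slot and return the next jittered cooldown to sleep."""
        self.in_flight = False
        return self.base_cooldown_s + random.uniform(0.0, self.jitter_s)
```

Each agent tick calls try_acquire() before generating and sleeps for release()'s return value afterwards, so bursts collapse to one proposal at a time.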

Gap

3.7 · No sponsor cards ever appear in AI proposals

Reported as: #10.

Root cause

agent.py:83. The ALLOWED_BUFF_TYPES tuple is ("quiz", "poll", "prediction", "rating", "binary", "info_card", "emoji_reaction"). Neither sponsor_card nor sponsor_poll is in the allowed list. Even if the LLM generated sponsor_card, _normalize_buff_type (lines 98–132) would coerce it to "poll" because it isn't whitelisted.

This is by design today — sponsor content is treated as admin-curated, not LLM-generated — but if the product expectation is that AI proposes sponsor placements at natural moments (half-time, sponsor-themed events), this is a feature gap, not a bug.

Fix scope

Either add the types to the whitelist and extend the prompt to explain when sponsor cards make sense, or document that sponsor placement is always manual. Treated as a gap pending product direction.
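If product opts in, the whitelist change is mechanical; a sketch mirroring the coercion behaviour the report describes for _normalize_buff_type (the Python function here is an illustration of the logic, not the actual agent code):

```python
# Current whitelist plus the two sponsor types, pending product direction.
ALLOWED_BUFF_TYPES = ("quiz", "poll", "prediction", "rating", "binary",
                      "info_card", "emoji_reaction",
                      "sponsor_card", "sponsor_poll")

def normalize_buff_type(raw: str) -> str:
    """Unknown types still fall back to 'poll', as today; sponsor types
    now survive normalisation instead of being coerced away."""
    return raw if raw in ALLOWED_BUFF_TYPES else "poll"
```

The prompt change (explaining when sponsor placements make sense, e.g. half-time) is the larger half of the work.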

05

UX & visual polish

Smaller defects that affect operator confidence and presentation.

5.1

Error placement (#26)

"Cannot send: A buff is already live" appears far from the publish/send area, making the cause non-obvious. Move toast or inline error into the deck row near the affected buff.

5.2

NEW button placement (#27)

Empty-state message points to the "+ NEW" button in the top navbar, far from the user's eye. Place a contextual "+ Create" button inside the empty deck panel itself.

5.3

Registration not enabled at start (#28)

Event went live before registration was enabled in event settings. Default registration to enabled on draft → live transition, or show a pre-flight checklist before GO LIVE.

5.4

Field labels hard to read (#29)

Email and Name labels in the viewer onboarding flow have low contrast. CSS contrast pass on form labels in viewer auth screens.

5.5

Font contrast (#30)

White text reads fine, coloured fonts less so — particularly muted text (var(--t-text-muted), var(--t-text-dim)) on glass surfaces. WCAG AA audit pass on the light theme.

5.6

Default schedule '5:00' (#24) — not a bug

Per the operations team, this was an admin workaround for the broken Send button (#18) — admins schedule a buff for 1 second from now to bypass the locked send. Once #18 is fixed, the 5:00 default is acceptable.

06

Feature requests (not bugs)

Items reported alongside the bugs but reflecting missing functionality rather than coding defects. Listed for transparency so they can be triaged into the product backlog.

Workflow

6.1 · Allow scheduling directly from AI proposal cards (#25)

Currently AI proposals can only be approved or dismissed — they can't be scheduled directly. The intended flow is: approve → draft buff appears in Ready deck → schedule from there. Per the operations team this is not a bug but a desired feature: combine "Approve & Schedule" into one action so the operator doesn't have to bounce between panels.

Data

6.2 · Master feed aggregating multiple sources (#12)

No aggregator service combining ESPN, FeedSync, social commentary, and editorial inputs into a single normalized feed. Substantial product effort — separate adapters per source plus a merge/dedupe layer.

Editorial

6.3 · Publish our own events into the feed (#13)

Operators have no way to inject custom commentary lines (e.g. an editorial talking point) into the live feed for the AI to react to. Adding an admin "publish to feed" endpoint would let editors prime the AI with curated moments.

Pre-match

6.4 · Pre-match context and build-up (#14, #15, #16)

No mechanism for ingesting pre-match articles, talking points, social commentary, or starting line-ups before kickoff. AI proposals start cold at minute 0. A pre-match context loader (e.g. reading from FeedSync 60 minutes before kick-off, embedding team news / lineup / talking points into the prompt) would meaningfully improve early-match buff quality.

Engagement

6.5 · Fan-reaction / text-commentary alongside in-game feed (#17)

Today only the in-game feed informs proposals. Fan reactions are captured in-app but not used as additional prompt context. The richer prompt path (agent.py) does include a CHAT_SECTION placeholder, but the webhook ingest path does not. Fix here aligns with the prompt-context work in 3.1.

07

External data quality (not JIL coding issues)

7.1 · ESPN feed commentary too thin to drive useful buffs (#11)

The ESPN/FeedSync commentary lines are short and event-driven ("Goal scored", "Yellow card") without contextual depth. Function of the upstream provider — not something the JIL backend can fix in isolation. Mitigation lives at the prompt-context level (3.1) so the LLM can reason from the match metadata even when individual feed lines are sparse.

7.2 · Insufficient feed depth and frequency (#9)

Same root: upstream FeedSync polling is on a 30-second cadence and produces thin lines per event. Tightening this requires upstream work (faster polling, richer normalization), not a JIL code change.

08

Suggested fix priority

  1. 2.1 · fingerprintValue undefined: one-line fix that unblocks 2.3 (send button), #20 (counts), and likely #21 (end-of-game refresh)
  2. 2.2 · Edits lost on approve: data loss; trivial fix on either frontend or backend
  3. 3.1 · Missing prompt context for webhook ingest: single biggest improvement to AI quality; addresses #1, #2, #4, partially #5, #6, #11
  4. 2.4 · Live match clock drift: makes the timeline panel meaningful for live matches
  5. 4.1 · Sponsor not rendering on PoM: revenue surface; small fix
  6. 4.2 · Sponsor cross-event leak: brand safety
  7. 3.5, 3.6 · Cooldown / bunching: improves perceived liveness
  8. 3.2 · Repetition: cheap prompt-level fix
  9. 4.3, 4.4 · Sponsor visuals + image sizing: CSS + upload validation work
  10. UX polish (5.x): polish pass
09

File reference index

Area · File · Lines · Issue
Frontend state · frontend/client/src/admin/AdminDashboard.tsx · 328, 335 · fingerprintValue undefined
Frontend state · frontend/client/src/admin/AdminDashboard.tsx · 588–605 · handleSend blocked by stale activeBuff prop
Frontend state · frontend/client/src/admin/AdminDashboard.tsx · 1889 · handleApprove does not save edits before approving
Frontend modal · frontend/client/src/admin/EditCardModal.tsx · 44–48 · "Approve & Add to Deck" path skips edit-save
Backend ingest · backend/app/services/feed_ingest_processor.py · 117–126 · Webhook ingest uses generic prompt, ignores match context
Backend ingest · backend/app/api/feed_commentary_integration_routes.py · 91–98 · New webhook also funnels through FeedIngestProcessor
Backend agent · backend/app/feed/agent.py · 83 · ALLOWED_BUFF_TYPES excludes sponsor types
Backend agent · backend/app/feed/agent.py · 271, 370, 475–536 · Cooldown, polling, bunching behaviour
Backend agent · backend/app/feed/mock_feed.py · 91–97 · Wall-clock × multiplier instead of real match clock
Backend approve · backend/app/api/viewer_api_routes.py · 3823–3864 · Approve endpoint does not accept edited content
Backend serializer · backend/app/services/api_serializers.py · 117–132 · Sponsor only emitted when eagerly loaded; no isolation guard
Frontend sponsor · frontend/client/src/components/SponsorCardRenderer.css · 32 · Logo 32px × 32px, too small
Frontend media · frontend/client/src/admin/MediaGallery.tsx · 45–129 · No image dimension validation or recommended sizes
Backend prompt · backend/app/prompts/feed_buff_gen/system_prompt.txt · 87–88 · Generic prompt with cricket example primes wrong sport
Backend prompt · backend/app/prompts/feed_buff_gen/sports/system_propmt.txt · (filename typo) · Rich prompt, only used by agent.py, not by webhook ingest