When writing prompts, always ask yourself this question: What am I leaving out for AI to assume?

This will help you refine your prompt even when you are not a subject matter expert.

I provided ChatGPT with a list of four prompts and asked what its default assumptions would be when generating output.

Here’s what it came back with.

Keep in mind, these AI default assumptions may differ in your case; they depend on your chat history, model version, and how the AI interprets missing details.

1. GA4 Data (UI-level).

Prompt:

Show me users who converted last month.

AI default assumptions (if you don’t clarify):

Date range → Calendar month (e.g., Sep 1–30), not rolling 30 days.

Users → total users, not active users or new users.

Conversions → Only purchase events (ignoring other marked conversions like generate_lead).

Scope → User-level aggregation.

Formula → COUNT(users WHERE event_name = 'purchase').

Why it feels wild: AI interprets conversion as purchase only. That's technically valid, but if your property is lead-gen or subscription-based, your true conversions vanish.
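To make the gap concrete, here is a minimal sketch of the two readings of that prompt. The event rows, user IDs, and dates are invented for illustration; the point is only that the purchase-only default silently drops lead conversions:

```python
from datetime import date

# Hypothetical event rows: (user_id, event_name, event_date).
events = [
    ("u1", "purchase", date(2024, 9, 5)),
    ("u2", "generate_lead", date(2024, 9, 12)),
    ("u3", "purchase", date(2024, 8, 30)),  # outside the calendar month
    ("u1", "purchase", date(2024, 9, 20)),  # same user, counted once
]

def converted_users(events, conversion_events, start, end):
    """Distinct users with a qualifying event inside [start, end]."""
    return {
        user for user, name, day in events
        if name in conversion_events and start <= day <= end
    }

# AI's default reading: purchase-only, calendar month Sep 1-30.
default = converted_users(events, {"purchase"},
                          date(2024, 9, 1), date(2024, 9, 30))

# Clarified reading for a lead-gen property: generate_lead counts too.
clarified = converted_users(events, {"purchase", "generate_lead"},
                            date(2024, 9, 1), date(2024, 9, 30))

print(len(default))    # 1 -- u2's lead vanishes under the default
print(len(clarified))  # 2
```

Same data, same date range; the only difference is which events count as a conversion.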

2. GA4 Data in BigQuery.

Prompt:

Give me revenue by country.

AI default assumptions (if you don’t clarify):

Date range → Entire dataset (full table scan).

Metric → SUM(event_params.value.int_value) where event_name = 'purchase'.

Dimensions → geo.country.

Event scope → Only purchase events with a value parameter.

Currency → Whatever currency the property records (no normalization).

Why it feels wild: AI assumes you mean ecommerce purchase revenue, ignoring ads revenue or item-level revenue. It's still valid revenue, but probably not what you meant.
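The two readings differ mainly in date filtering and in which field gets summed. Here is a sketch of both queries against the standard GA4 BigQuery export schema; the project and dataset names are placeholders, and the "clarified" version is just one reasonable spelling of what you might actually want:

```python
# AI's likely default: full table scan, purchase-only, raw event_params value.
default_sql = """
SELECT
  geo.country,
  SUM((SELECT value.int_value FROM UNNEST(event_params)
       WHERE key = 'value')) AS revenue
FROM `my-project.analytics_123456.events_*`  -- placeholder dataset
WHERE event_name = 'purchase'
GROUP BY geo.country
"""

# Clarified: bounded date range and the ecommerce revenue field.
clarified_sql = """
SELECT
  geo.country,
  SUM(ecommerce.purchase_revenue) AS revenue
FROM `my-project.analytics_123456.events_*`  -- placeholder dataset
WHERE event_name = 'purchase'
  AND _TABLE_SUFFIX BETWEEN '20240901' AND '20240930'
GROUP BY geo.country
"""

print("_TABLE_SUFFIX" in default_sql)    # False -- nothing stops a full scan
print("_TABLE_SUFFIX" in clarified_sql)  # True
```

Without the `_TABLE_SUFFIX` filter, BigQuery scans every daily export table you have, which costs real money on large properties.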

3. System Prompts (AI Agent Personality).

System prompt:

You are concise and precise.

AI default assumptions (if you don’t clarify):

Tone → Minimal, factual, technical.

Style → 2–3 sentences max, no softening language.

Content → Exact GA4 field names (event_name, session_start, geo.country).

Formatting → Plain text, no narrative.

Why it feels wild: AI responds in a robotic, field-heavy way. Accurate, but stripped of any human context or usability.

4. Generative AI (Video Generation).

Following is my very first AI-generated human.

I didn't specify the girl's age in my prompt, only mentioning "young blond," which AI interpreted as a child. Also, the girl was not supposed to say anything, only smile. But since I did not explicitly state that in my prompt, she ended up speaking gibberish.


That’s a textbook example of AI default assumptions kicking in when details are missing.

AI Default Assumptions (if you don’t clarify):

Age → Defaults to child (because “young blond” in training data often maps to kids).

Behavior → Assumes she is an active character, not passive.

Dialogue → Assumes she should speak, because in cinematic training data “smiling characters” often also deliver lines.

Tone → Defaults to playful or innocent (again, a child trope).


Generative AI (images, video, sound, music) is where default assumptions get the wildest, because those models are trained on huge cultural/visual/audio priors.

If you don’t control every detail (age, lighting, sound, motion, expressions, storyline...), AI will default to clichés, often surreal, sometimes weird or creepy.

Without domain expertise, you don’t even know what’s missing.

If you don’t already know GA4 has metrics vs. dimensions vs. scopes, you’d never think to say “show me your assumed date range, metrics, dimensions, and scope.” You’d ask the surface-level question.

This is why AI outputs can feel “wild”:

Experts spot the hidden assumptions right away (e.g., “Oh, it defaulted to purchase-only conversions”).

Non-experts don’t even realize assumptions were made, so they take the output as correct.

So, what can you do if you lack domain expertise?

Here are two methods to surface AI assumptions: