n8n requires technical knowledge; Google’s Opal, Base44, and Lovable work with plain English instructions. That ease is also their Achilles’ heel.

They work magically until they don’t, and then good luck.

Platforms like Opal, Base44, and Lovable are built around natural-language orchestration, not explicit logic definition.

Opal, for instance, lets you build apps that chain prompts, models, and tools, all using simple natural language and visual editing.


That means there’s no visible code map of how your data moves or transforms; it’s hidden behind model interpretation.

The system might do the right thing 90% of the time, but when it misinterprets your intent, there’s no surface to debug.

Here’s the reality check:

You say, “Send a Slack message when a new lead books a demo in HubSpot.”

The AI builds the automation instantly. Everything works fine for a while.

Then, one day, someone on your team renames the field in HubSpot from 'lead_booked' to 'demo_scheduled'.


Now the automation silently stops working.

There’s no error message, no log, no visible logic map because the “workflow” lives inside the model’s interpretation.

You try to fix it, but there’s no code to inspect. The tool only shows a vague “something went wrong” alert.
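
To make that concrete, here’s a rough sketch in plain Python (not actual Opal internals; the field names are just the ones from this example) of what the opaque automation effectively does once the field is renamed:

```python
# A rough sketch of the hidden logic an AI builder might produce.

def notify_if_demo_booked(contact: dict) -> None:
    # The model quietly keyed the logic off 'lead_booked'.
    # .get() returns False for the renamed field, so the branch is skipped:
    # no error, no log, no Slack message.
    if contact.get("lead_booked", False):
        print(f"Slack: new demo booked by {contact.get('email')}")

# Field renamed to 'demo_scheduled' -> the automation silently does nothing.
notify_if_demo_booked({"email": "jane@example.com", "demo_scheduled": True})
```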

Same Scenario (Using n8n or Another Explicit Workflow Tool):

In n8n, you build the workflow explicitly:

  • A HubSpot trigger node watches for the field 'lead_booked' = true.
  • A Slack node sends the message when the condition is met.

If HubSpot changes the field name, the workflow immediately throws a clear error: “Field ‘lead_booked’ not found.”

You can open the node, update it to 'demo_scheduled', and the workflow will work again.
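
Here’s the same logic sketched explicitly, the way an n8n node would surface it. It’s an illustration, not n8n’s actual code, but it shows why a renamed field fails loudly instead of silently:

```python
# The same check, made explicit: the field name is a visible dependency,
# and a missing field fails loudly instead of silently.

def notify_if_demo_booked(contact: dict) -> None:
    if "lead_booked" not in contact:
        # The equivalent of n8n's node error: you see exactly what broke.
        raise KeyError("Field 'lead_booked' not found in HubSpot contact")
    if contact["lead_booked"]:
        print(f"Slack: new demo booked by {contact.get('email')}")

try:
    notify_if_demo_booked({"email": "jane@example.com", "demo_scheduled": True})
except KeyError as err:
    print(err)  # Clear error; the fix is a one-line rename to 'demo_scheduled'.
```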

Takeaway:

  • AI Builders (Opal, Base44, Lovable): Easy setup, no visibility, no control when logic breaks.
  • Explicit Builders (n8n, manual workflows): Require more setup but give full explainability and control.

In short:

“Natural language makes automation easy until you need to understand why it broke.”

And here’s the deeper problem:

Unless you can explain what you meant in operational terms (the exact data, logic, and dependencies), your text prompts are unlikely to work.

Even Base44’s documentation explicitly warns: “Be clear and specific… a little context goes a long way.”

That’s just another way of saying: you need domain expertise.

In AI systems, explainability equals control.

The moment logic becomes opaque, users move from builders to bystanders.

Because you didn’t build the logic, you don’t control it. It’s as simple as that.

When something breaks, you’ve got no clue what’s going on. And good luck customising it beyond what the AI “thinks” you meant.

Even with my GA4 BigQuery Composer (a custom ChatGPT I developed to automate SQL generation for GA4 BigQuery), I always teach students the logic first so they can customise the output if needed.

Why?

Because when you don’t understand the logic, you don’t control the output.
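
For illustration, here’s roughly the kind of query the Composer produces (the project and dataset IDs below are placeholders). The comments mark the pieces of logic a student needs to understand before customising the output:

```python
# A minimal sketch, assuming placeholder project/dataset IDs, of a typical
# GA4 BigQuery export query and how you'd run it.
from google.cloud import bigquery

sql = """
SELECT
  event_date,
  COUNT(*) AS page_views
FROM `my-project.analytics_123456.events_*`            -- GA4 export: one table shard per day
WHERE _TABLE_SUFFIX BETWEEN '20240101' AND '20240131'   -- the date range lives in the table suffix
  AND event_name = 'page_view'                          -- swap the event to customise the report
GROUP BY event_date
ORDER BY event_date
"""

client = bigquery.Client()  # uses your default Google Cloud credentials
for row in client.query(sql).result():
    print(row.event_date, row.page_views)
```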

The future isn’t “no-code AI.”

It’s explainable automation, where the AI builds the workflow, but you can still open the hood, inspect the logic, and change it.