March 29, 2026
GIS
Tags: ArcGIS SDK · AI · Claude · Experience Builder · Disaster Response

What I Learned Past the EB Ceiling

I use Experience Builder every day for operational GIS work. It's genuinely powerful — maps, dashboards, lists, filters, all wired together without writing a line of code. I'm not dismissing it. For most operational GIS needs, it's the right tool.

But EB is a consumer of the ArcGIS JavaScript SDK. It uses maybe 20% of what the SDK exposes — the parts Esri decided were safe to put behind a configuration panel. The other 80% — dynamic rendering, GPU effects, custom hit-testing, real-time AI integration — only unlocks when you write SDK code directly.

I didn't fully understand this until I started building outside EB. The difference isn't incremental. It's categorical. EB lets you configure a map. The SDK lets you build anything that runs in a browser.


Technique 1 — Changing What the Map Shows In Real Time

In EB, you pick a renderer once. Maybe you choose unique values on a status field, or class breaks on a count field. That choice is frozen into the app configuration. If an EOC commander needs to see the same data three ways in ten minutes — by status, by occupancy, by capacity — they can't, short of juggling three separate apps.

With the SDK, the renderer is just a JavaScript object. Change the object, change the map. No page reload. No server call. The new visual encoding is computed from data already sitting in the browser — the data hasn't moved.

This matters operationally more than I expected. Shelter managers during a hurricane don't want three maps. They want one map that answers three questions. Which shelters are open? Which ones are getting full? Which ones still have capacity? The answer is the same layer, rendered three different ways.

What this enables:

  • Switch between unique-value (status), class-breaks (occupancy), and proportional-size (capacity) renderers on demand
  • Apply WebGL effects — bloom, drop shadow, blur — directly on the GPU
  • Dim non-selected features, highlight critical ones, all without a server round-trip
  • Trigger renderer changes from any UI event — slider, dropdown, button, timer
// Swap renderer based on what the EOC commander needs to see
// One line. No server call. Instant.

layer.renderer = {
  type: "class-breaks",
  field: "OCCUPANCY_PCT",
  classBreakInfos: [
    { minValue: 0,  maxValue: 50,  symbol: { type: "simple-marker", size: 10, color: "#2d5a27" } }, // green
    { minValue: 50, maxValue: 80,  symbol: { type: "simple-marker", size: 10, color: "#b8860b" } }, // amber
    { minValue: 80, maxValue: 100, symbol: { type: "simple-marker", size: 10, color: "#c41e3a" } }  // red
  ]
};

This is the technique that has zero compliance friction. Everything stays in the browser, on your network. No data leaves. Build this today.
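The dim-and-highlight bullet above maps to the SDK's FeatureEffect. Here's a minimal sketch: the layer variable, the STATUS field, and the specific effect strings are my assumptions, but the plain object autocasts when assigned to a FeatureLayer's featureEffect property in recent 4.x releases of the ArcGIS JS SDK.

```javascript
// Build a FeatureEffect config that spotlights matching shelters and
// dims everything else. All work happens on the GPU — no server call.
function spotlightEffect(statusValue) {
  return {
    filter: { where: `STATUS = '${statusValue}'` },
    includedEffect: "bloom(1.5, 0.5px, 0.1)",       // glow on matches
    excludedEffect: "grayscale(100%) opacity(30%)"  // fade the rest
  };
}

// Usage (inside an app with a loaded FeatureLayer named shelterLayer):
// shelterLayer.featureEffect = spotlightEffect("OPEN");
```

Like the renderer swap, this is a one-assignment change that can be wired to any UI event.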


Technique 2 — From Map Click to AI Intelligence

The SDK's view.hitTest() lets you intercept any click on the map; disable the default popup and you — not Esri — decide what happens on click. That's interesting in itself. But the more important part is what you do next.

hitTest() gives you the raw feature attributes. Every field, every value, for the feature under the cursor. Those attributes are structured data. Structured data is exactly what language models are good at reasoning about.

The chain is: click event captures attributes, attributes plus operational context go to the Claude API as a structured prompt, Claude returns a situation assessment with action priorities and coordination notes, and that streams back into a sidebar panel. Under three seconds from click to intelligence.

What makes this useful rather than just clever is the context layer. Claude doesn't just get the clicked shelter's attributes. It gets the status of all other shelters in the network, the disaster type, and operational flags like ADA access and pet-friendly status that affect routing decisions. That's the difference between "here is this shelter's data" and "here is what you should do about this shelter right now."

What Claude gets to reason with:

  • The clicked shelter's full attribute set — every field, not just the renderer fields
  • The status of all other shelters in the network, so Claude can reason about routing and capacity distribution
  • The disaster type context — hurricane vs wildfire vs flood changes urgency and operational priorities
  • Operational flags like ADA accessibility and pet-friendly status that affect which shelters to recommend
// The moment a user clicks a shelter dot — the default Esri popup never opens
view.popupEnabled = false;  // keep the built-in popup out of the way
view.on("click", async (evt) => {
  const response = await view.hitTest(evt);
  const hit = response.results.find(r => r.graphic?.layer === shelterLayer);

  if (hit) {
    const attrs = hit.graphic.attributes;

    // These attributes become the Claude prompt
    const analysis = await callClaude({
      shelter:      attrs.NAME,
      status:       attrs.STATUS,
      occupancy:    attrs.OCCUPANCY_PCT,
      capacity:     attrs.CAPACITY,
      disasterType: currentContext,       // set by operator
      networkState: getOtherShelters()    // full network awareness
    });

    // Display streaming response in sidebar — no page reload
    renderAnalysis(analysis);
  }
});
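The callClaude() helper above is where attributes become a prompt. A sketch against the Anthropic Messages API — the prompt wording and model name are my assumptions, and in production you would route the request through your own backend rather than ship an API key to the browser:

```javascript
// Turn the structured shelter context into a single prompt string.
function buildPrompt(ctx) {
  return [
    `Disaster type: ${ctx.disasterType}.`,
    `Clicked shelter: ${ctx.shelter} — status ${ctx.status}, ` +
      `${ctx.occupancy}% of ${ctx.capacity} capacity.`,
    `Other shelters: ${JSON.stringify(ctx.networkState)}.`,
    `Give a brief situation assessment, action priorities, and coordination notes.`
  ].join("\n");
}

const ANTHROPIC_API_KEY = "...";  // placeholder — never hardcode in production

// POST to the Anthropic Messages API (or, better, to a backend proxy
// that holds the key). Model name is an assumption.
async function callClaude(ctx) {
  const res = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "content-type": "application/json",
      "x-api-key": ANTHROPIC_API_KEY,
      "anthropic-version": "2023-06-01"
    },
    body: JSON.stringify({
      model: "claude-sonnet-4-20250514",
      max_tokens: 1024,
      messages: [{ role: "user", content: buildPrompt(ctx) }]
    })
  });
  const data = await res.json();
  return data.content[0].text;
}
```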

The compliance question — be honest about this. Technique 2 sends data to an external API. For operational data during active disasters, that requires IT and data governance approval — which at large orgs can take months. For your own learning, demos, and LinkedIn articles: use synthetic data and label it clearly. The architecture is real. The pathway to production requires organizational buy-in.

The middle path: run a local LLM via Ollama. Same intelligence chain, zero data leaving your network.
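The swap is mostly a URL change. A sketch against Ollama's /api/generate endpoint, assuming Ollama is running locally with a model already pulled — the model name is an assumption:

```javascript
// Build the request body for Ollama's generate endpoint.
function buildOllamaRequest(prompt, model = "llama3.1") {
  return { model, prompt, stream: false };
}

// Same intelligence chain, but the POST goes to localhost — no shelter
// data leaves the network.
async function callLocalLLM(prompt) {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(buildOllamaRequest(prompt))
  });
  const data = await res.json();
  return data.response;  // Ollama returns the generated text in `response`
}
```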


These Are Not the Same Thing

Both techniques use the same SDK. Both require writing code outside EB. But they solve fundamentally different problems.

| | Technique 1 | Technique 2 |
|---|---|---|
| What it does | Changes how data looks | Reasons about what data means |
| Data stays local | Yes — always | No — sends to external API |
| Compliance friction | Zero | Requires IT approval |
| Output | Better visualization | Actionable intelligence |
| Esri equivalent | Nothing in EB | Nothing in the Esri ecosystem |
| Build today | Yes | With synthetic data only |

I learned both of these in a single session. My head got full fast — which is a sign you're actually learning something. The key insight: Technique 1 is about presenting data better. Technique 2 is about understanding data better. Both use the same SDK. Both are invisible to most GIS professionals. That's the gap worth closing.


The 80% Nobody Is Using

Most GIS professionals are EB users — not SDK developers. That's fine. EB is powerful, well-documented, and the right tool for a lot of work. There's no reason to learn the SDK just to learn it.

But the organizations doing the most important work — disaster response, humanitarian operations, emergency management — need tools that think, not just display. They need maps that adapt to the situation, not maps that were configured for a hypothetical situation months ago. The gap between "what EB can do" and "what these organizations actually need" is exactly where the SDK lives.

You don't have to master all of it at once. Learn one technique. Build one tool. Write about what you learned. The GIS + AI space is early enough that documenting what actually works, honestly and specifically, is itself a contribution.

"Experience Builder is a consumer of the SDK. When you write SDK code directly, you get the other 80%."


Next: Stop Hitting the Wall — ArcGIS SDK + AI