There's a step in Esri's new agentic apps documentation that looks minor and is easy to skip. It's called "Generate Embeddings." If you skip it, the chat panel either returns nothing or hallucinates — it can't connect a question like "show me shelters near flooding" to the actual layers in your map. This article is about what that step does, why it's required, and what the full pattern looks like once it's in place.
The Problem an LLM Has With Your Map
A web map is a JSON document. It contains layer references, drawing rules, and metadata. An LLM can read JSON. What it cannot do, without help, is reliably map a natural language question to the right layer and the right fields inside that layer.
Your map might have a layer called ARC_Shelter_2024_v3_FINAL. The user asks: "Where are open shelters?" Those two things are connected — but only if something has already indexed that the word "shelter" relates to that layer, and that the field STATUS with value "OPEN" is how you filter for open ones.
That indexing is what embeddings do. Before your map can answer questions, it needs a pre-built semantic index: compact vector representations of each layer's name, alias, description, and field metadata. The LLM doesn't query the map directly. It queries the embeddings index, identifies the relevant layer and fields, then runs the actual spatial query.
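To make the mechanics concrete, here is a toy sketch of what an embeddings index lookup does. The vectors are hand-made 3-d stand-ins (real embeddings are high-dimensional model outputs, and Esri's actual representation isn't public); the layer names come from this article's examples.

```typescript
// Toy sketch of an embeddings index lookup — NOT Esri's implementation.
// Real embeddings are high-dimensional vectors from a language model; these
// tiny hand-crafted vectors just make the cosine-similarity mechanics visible.

type IndexEntry = { layer: string; vector: number[] };

// Pretend index built from layer names, aliases, and field metadata.
const index: IndexEntry[] = [
  { layer: "ARC_Shelter_2024_v3_FINAL", vector: [0.9, 0.1, 0.0] }, // "shelter"-like meaning
  { layer: "FEMA_Flood_Hazard_Zones",   vector: [0.1, 0.9, 0.0] }, // "flooding"-like meaning
  { layer: "Road_Centerlines",          vector: [0.0, 0.1, 0.9] }, // "roads"-like meaning
];

function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const mag = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (mag(a) * mag(b));
}

// In the real pipeline an embedding model turns the question into a vector;
// here we skip that step and supply the vector directly.
function nearestLayer(questionVector: number[]): string {
  let best = index[0];
  for (const entry of index) {
    if (cosine(questionVector, entry.vector) > cosine(questionVector, best.vector)) {
      best = entry;
    }
  }
  return best.layer;
}

// "Where are open shelters?" embeds close to the shelter layer's vector,
// even though the question never mentions the layer's actual name.
console.log(nearestLayer([0.85, 0.15, 0.05])); // ARC_Shelter_2024_v3_FINAL
```

The point of the sketch: nothing matches strings. The question lands on `ARC_Shelter_2024_v3_FINAL` purely because its vector is nearby.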
How to Generate Embeddings
The step is in ArcGIS Online (AGOL), not in code. It runs once per web map, and you re-run it whenever the map's layer structure changes.
1. Open the web map item page in ArcGIS Online
2. Go to item settings (the Settings tab, or the three-dot menu depending on your AGOL version)
3. Find "Manage AI vector embeddings"
4. Click Generate
It takes about a minute. The embeddings are stored as a resource on the web map item — not in the layers themselves, not in your code. When your agentic app initializes the map, it loads those embeddings automatically. The LLM then has a grounding layer to work from.
Without this step, the AI has no index to query. It's flying blind over your map and guessing at layer names.
The Full Pipeline
Once embeddings exist, the pattern for a conversational query is:
1. User types a natural language question
2. The AI agent converts the question into an embedding (a vector)
3. That vector is compared against the map's stored embeddings index
4. The closest matches identify the relevant layer and fields
5. The agent constructs a spatial or attribute query against those layers
6. Results return as features, counts, or a summary
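A minimal sketch of steps 5 and 6, assuming the index lookup (steps 2 through 4) has already returned a match. The layer and field names come from this article's examples; the query construction shown is illustrative, not Esri's actual implementation.

```typescript
// Illustrative sketch of turning a resolved intent into an attribute query.
// Layer/field names are from the article's example; the logic is not Esri's.

type FieldInfo = { name: string; values?: string[] };
type LayerMatch = { layer: string; fields: FieldInfo[] };

// What the embeddings lookup might hand back for "where are open shelters?"
const match: LayerMatch = {
  layer: "ARC_Shelter_2024_v3_FINAL",
  fields: [{ name: "STATUS", values: ["OPEN", "CLOSED"] }],
};

// Step 5: resolved intent becomes a SQL-ish where clause, which is the
// general shape feature-service attribute queries consume.
function buildWhere(field: string, value: string): string {
  return `${field} = '${value}'`;
}

const where = buildWhere("STATUS", "OPEN");
console.log(`${match.layer} → ${where}`); // ARC_Shelter_2024_v3_FINAL → STATUS = 'OPEN'
```

Step 6 would then run that clause against the matched layer and hand features, a count, or a summary back to the chat panel.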
Step 4 is where the layer name problem is solved. The embeddings index knows that "shelter," "emergency housing," and "temporary lodging" all map to the same layer — even though the layer is named something else entirely. Semantic similarity, not keyword matching.
This is meaningfully different from a search box. A search box does string matching. An agentic map does intent resolution. You can ask "Are there any areas with both flooding risk and open shelters?" and the agent can figure out that answering that requires two layers, a spatial intersection, and a filter condition — without you specifying any of that.
Where This Is Headed
Esri is building this into the SDK at the framework level. SDK 5.0 (currently in beta) includes an AiAssistant widget and supporting tools specifically for this agentic pattern. The widget handles the chat UI, the embedding lookups, the query construction, and the response formatting. You configure it with your map and your LLM endpoint; it handles the pipeline.
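Esri hasn't published the 5.0 widget's final markup, so the snippet below is purely hypothetical: the element names echo this article and Esri's existing map-components naming, but every attribute here is an assumption, not documented API.

```html
<!-- Hypothetical sketch only — attribute names are guesses, not documented API. -->
<arcgis-map item-id="YOUR_WEB_MAP_ITEM_ID">
  <arcgis-assistant></arcgis-assistant>
</arcgis-map>
```

The declared premise holds regardless of the final syntax: you point the widget at a web map whose embeddings exist, and the chat UI, embedding lookups, query construction, and response formatting are handled for you.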
The premise of the pattern: a field user can type "show me open shelters near this flooding area" and the map responds — filtering, zooming, querying — without the user knowing which layers are involved or touching a single widget. That's the pipeline working as described above, against real operational layers, with a user who has no GIS training.
What This Changes
The existing SDK pattern — describe what you want, get working code, deploy in a day — unlocks the SDK for GIS professionals who don't write code. The agentic pattern is the next step: it unlocks the map itself for users who don't know GIS.
The GIS professional's job shifts. Instead of building UIs that guide non-technical users through filter panels and query builders, you're building maps that can be interrogated. The expertise you bring is in the data model: understanding which layers answer which questions, how to name and describe fields so the embeddings index captures the right semantic relationships, what queries are possible against your data.
The configuration work moves from widget interactions to metadata quality. Well-named fields, good layer descriptions, accurate summaries — these become load-bearing. The LLM's ability to correctly route a question to the right layer depends entirely on whether the index has enough signal to work from.
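As an illustration of metadata becoming load-bearing, compare the raw layer name to what a well-described item might expose. The shape below is simplified for readability; it is not the exact AGOL item schema:

```json
{
  "name": "ARC_Shelter_2024_v3_FINAL",
  "title": "Emergency Shelters (Red Cross, 2024)",
  "snippet": "Open and closed emergency shelter locations maintained by the American Red Cross.",
  "fields": [
    { "name": "STATUS", "alias": "Shelter Status (OPEN / CLOSED)" }
  ]
}
```

The title, snippet, and alias are what the embeddings index actually sees. With them, "open shelters" has signal to land on; without them, the index has only `ARC_Shelter_2024_v3_FINAL` and `STATUS` to work from.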
The AI Is Esri's, Not Yours
The arcgis-assistant component is part of Esri AI — their proprietary platform announced at DevSummit 2026. There is no API key to manage client-side. The intelligence runs through Esri's own hosted infrastructure (almost certainly Azure-based — Esri's stack runs on Azure, and their deep Microsoft partnership aligns with Azure OpenAI, though Esri hasn't confirmed the model publicly).
What actually happens on a query:
- The user's question + the relevant schema from the embeddings index get sent to Esri's AI service
- Esri's backend resolves the query intent against your map's layer and field structure
- The response returns as map actions (zoom, filter, query) or a text answer
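Esri hasn't published this response format. The sketch below only illustrates the idea in the list above — the client receives structured map actions rather than prose, and applies them — and every type and field name in it is an assumption.

```typescript
// Hypothetical response shapes — Esri hasn't published this wire format.
// The point is only that the assistant returns *actions*, not raw text,
// and the client translates each one into map behavior.

type MapAction =
  | { type: "zoom"; extent: [number, number, number, number] } // xmin, ymin, xmax, ymax
  | { type: "filter"; layer: string; where: string }
  | { type: "query"; layer: string; where: string };

// A real dispatcher would call into the SDK; this one just narrates
// what it would do, so the control flow is visible.
function describe(action: MapAction): string {
  switch (action.type) {
    case "zoom":
      return `zoom to [${action.extent.join(", ")}]`;
    case "filter":
      return `filter ${action.layer} where ${action.where}`;
    case "query":
      return `query ${action.layer} where ${action.where}`;
  }
}

console.log(
  describe({ type: "filter", layer: "ARC_Shelter_2024_v3_FINAL", where: "STATUS = 'OPEN'" })
);
// filter ARC_Shelter_2024_v3_FINAL where STATUS = 'OPEN'
```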
This matters for deployment. The capability is gated on three things: SDK 5.0, web map embeddings being generated, and an AGOL org admin enabling "AI Assistants" in org settings. If the org setting isn't on, the widget does nothing regardless of what's in your code.
This is why a GISCorps or personal org is the right place to experiment — you control the admin settings.
Practical Notes
- Embeddings must be regenerated after any layer structure change (layers added or removed, fields renamed)
- The feature is tied to SDK 5.0 / ArcGIS Maps SDK next-gen — not available in 4.x apps
- Org admin must enable "AI Assistants" in AGOL org settings before the widget activates
- Testing: if a question routes to the wrong layer, check the layer alias and description in the AGOL item first — the embeddings reflect what's there