AI on existing software is not the same problem as building AI software
Integrating AI into existing business software is a different category of work than building an AI-first product. The constraints are sharper: you cannot rebuild the data model, the users have established workflows, and the system has to remain reliable through the integration. The patterns that work look different from greenfield AI work.
Pattern 1: AI in the read path
The safest first integration is putting AI on top of read queries. Summarize a long ticket. Generate an executive summary of recent activity. Surface anomalies in a list. The user reads, the AI explains. No writes, no state changes, no risk of corruption. This is the right first move for almost any existing system because the failure mode is just a bad summary, not a wrong action.
Pattern 2: AI suggestions, human writes
The second pattern is AI-suggested actions that a human approves before they commit. Suggested email replies, suggested CRM field updates, suggested categorizations, suggested next steps in a workflow. The human stays in the loop. Adoption is good because users feel in control, and the failure mode is rejected suggestions rather than wrong data.
Pattern 3: AI for triage and routing
Most business systems have a routing problem: which support ticket goes to which team, which sales lead is high-priority, which alert deserves immediate response. AI classifiers do this well and the consequences of a misclassification are usually reversible. The integration is also clean: classifier runs on inbound items, sets a tag or priority, the existing workflow proceeds.
Pattern 4: AI agents in defined sandboxes
For agents that take actions, define a sandbox where they can operate without risk. Generating a draft document. Filling a quote template. Creating a scheduled task that requires human confirmation. The sandbox makes the failure mode visible: if the agent does the wrong thing, it shows up in a draft that gets reviewed, not in a production write.
Pattern 5: AI as a separate microservice
Keep the AI layer as a separate service with its own deployment, observability, and rate limits. Do not embed it in the main application's request path. This matters because AI services have different failure modes (timeouts, hallucinations, version drift) than ordinary application code, and you want to isolate those failures from the rest of the system.
The failure modes we see most often
First, hallucination on the write path. A team puts AI on field updates and discovers six weeks later that 5% of records have plausible but wrong data. Always require human approval for AI writes in the first six months.
Second, prompt injection through user content. Anything that takes user content and passes it to an LLM is vulnerable to prompt injection unless you treat user content as untrusted strictly. We isolate user content in clear delimiters and use system prompts that reject embedded instructions.
Third, observability gap. The AI service runs without proper logging because the team is moving fast. When something goes wrong, you cannot diagnose it. Always log the full prompt and response (with PII redaction) from day one.
Fourth, version drift. The model behavior shifts between releases. Pin model versions explicitly, run regression tests on every model upgrade, and use a hold-back evaluation set that AI providers cannot retrain on.
Fifth, cost surprises. The AI feature ships, usage scales, and the monthly bill is 5x the budget. Set hard rate limits per user and per organization, monitor token usage daily, and design the feature so a runaway cost is visible within hours not weeks.
What we recommend for the first integration
Start with read-path AI. Pick one screen where a summary or extraction adds clear value. Build it as a separate service with full observability. Ship it to a beta group. Measure adoption and accuracy. Only after that is working should you consider AI-suggested actions or AI writes.
The goal of the first integration is not the most impressive AI demo. The goal is to learn how AI behaves in your specific data and workflow, on real users, without putting the existing system at risk.



