General FAQs

For FAQs around our products and pricing, see our Product & Pricing FAQs.

How could my app be enhanced with AI?

We think the most exciting opportunities are in natural language interfaces — giving your users a smooth, conversational way to interact with your app instead of clunky menus and clicks. With AI, the app can also remember past actions and preferences, so the experience feels natural, personal, and efficient.

But that's just the start - AI can also help you with things like:

  • Simplifying complex decisions - for example, guiding customers through product options, specifications, or service choices so they feel confident buying.
  • Engaging customers instantly - with AI handling first contact, answering common questions, or keeping prospects warm until a sales rep takes over.
  • Streamlining internal workflows - from drafting reports to summarizing data, AI can take away repetitive tasks so your team focuses on higher-value work.

In short: AI can make your app feel more natural to use, more helpful to your users, and more valuable to your business.

What technology do you use?

On the backend, we build on Microsoft's Semantic Kernel to create flexible, secure, and observable AI workflows. This foundation is enterprise-ready and designed to evolve as the technology advances. Deployments are typically hosted in Azure, though we can also support on-premises setups if your security policies require it.

The frontend technology is flexible. We can support virtually any frontend framework, though we move fastest in modern stacks such as React, Angular, or Blazor (for a full .NET solution).

Which model provider do you use (OpenAI, Azure OpenAI, Anthropic, open-source)?

We're model-agnostic. By default we use Azure OpenAI for its enterprise-grade security and controls, but we can integrate with other providers - including OpenAI, Anthropic, Google, or open-source models like Llama - when they're a better fit for your needs around data, cost, or policy.
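To illustrate what "model-agnostic" can look like in practice, here is a minimal sketch of a provider router: a single chat interface that dispatches to whichever backend a deployment is configured for. The class and the stub backends are hypothetical stand-ins, not real SDK calls or Gibbonix APIs.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional

ChatFn = Callable[[str], str]  # a provider is anything that maps prompt -> reply

@dataclass
class ModelRouter:
    """Routes chat requests to whichever provider a deployment is configured for."""
    providers: Dict[str, ChatFn]
    default: str

    def chat(self, prompt: str, provider: Optional[str] = None) -> str:
        name = provider or self.default
        if name not in self.providers:
            raise KeyError(f"No provider registered under '{name}'")
        return self.providers[name](prompt)

# Stub backends standing in for real SDK clients (Azure OpenAI, Anthropic, ...).
router = ModelRouter(
    providers={
        "azure-openai": lambda p: f"[azure-openai] {p}",
        "anthropic": lambda p: f"[anthropic] {p}",
    },
    default="azure-openai",
)
```

Swapping providers then becomes a configuration change rather than a rewrite: `router.chat("hello")` uses the default, while `router.chat("hello", provider="anthropic")` routes the same prompt elsewhere.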

Is it safe to integrate AI features into my app?

It's a very fair question - and the short answer is yes, when done carefully. Letting an AI loose on production data without safeguards introduces real risks, which is why we've built our approach around safety and control.

For read-only use cases (like searching or summarizing data), the risks are low - the worst outcome might be a confusing or unhelpful answer. But for write operations (like adding or editing records), we take extra care.

Gibbonix is designed to keep you in control:

  • Deliberate design - our MCP endpoints (controlled connection points into your app) are built to expose only carefully chosen, limited modifications, keeping the AI's "surface area" as small as possible.
  • Explicit confirmations - the system always asks for user approval before making state changes.
  • Compensating actions & rollbacks - if the AI makes a mistake, we provide tools to correct it quickly and safely.
  • Enterprise-grade guardrails - our workflows are built on top of Microsoft's Semantic Kernel, giving us observability and auditability at every step.
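The confirmation-and-rollback pattern above can be sketched in a few lines. This is an illustrative toy, assuming an in-memory record store; the names (`WriteAction`, `ConfirmationGate`) are hypothetical, not Gibbonix APIs.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class WriteAction:
    description: str           # shown to the user before approval
    apply: Callable[[], None]  # performs the state change
    undo: Callable[[], None]   # compensating action that reverses it

class ConfirmationGate:
    """Runs write actions only after explicit user approval, keeping an undo log."""
    def __init__(self, confirm: Callable[[str], bool]):
        self.confirm = confirm
        self.applied: List[WriteAction] = []

    def execute(self, action: WriteAction) -> bool:
        if not self.confirm(action.description):  # explicit confirmation step
            return False
        action.apply()
        self.applied.append(action)  # remember it so it can be compensated later
        return True

    def rollback_last(self) -> None:
        if self.applied:
            self.applied.pop().undo()

# Example: an in-memory "record store" the AI is allowed to modify.
records = {}
gate = ConfirmationGate(confirm=lambda desc: True)  # auto-approve for the demo
gate.execute(WriteAction(
    description="Add record 42",
    apply=lambda: records.update({42: "draft invoice"}),
    undo=lambda: records.pop(42, None),
))
gate.rollback_last()  # compensating action restores the previous state
```

In a real deployment the `confirm` callback would surface the pending change in the UI, and the undo log would be persisted for audit purposes rather than held in memory.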

In short: you get the power of natural language interfaces without the fear of losing control over your app's data.

Will AI replace human decision-making in our app?

No. AI is there to assist, not replace. We design for human-in-the-loop by default: the AI can suggest, draft, or automate routine steps, but high-impact or irreversible actions always require explicit user confirmation. You and your users stay in control, with AI working as your co-pilot.

Will our data be used to train public models?

No. From the Gibbonix side, your prompts and outputs are never used to train public models. On the provider side, the major enterprise offerings (Azure OpenAI, OpenAI, Anthropic, and Google) exclude API data from model training by default, and self-hosted open-source models keep your data entirely within your own environment - in every case, we configure deployments carefully to ensure your data stays private.