For FAQs around our products and pricing, see our Product & Pricing FAQs.
We think the most exciting opportunities are in natural language interfaces — giving your users a smooth, conversational way to interact with your app instead of clunky menus and clicks. With AI, the app can also remember past actions and preferences, so the experience feels natural, personal, and efficient.
But that's just the start - AI can help with much more than conversation.
In short: AI can make your app feel more natural to use, more helpful to your users, and more valuable to your business.
On the backend, we build on Microsoft's Semantic Kernel to create flexible, secure, and observable AI workflows. This foundation is enterprise-ready and designed to evolve as the technology advances. Deployments are typically hosted in Azure, though we can also support on-premises setups if your security policies require it.
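To make "observable AI workflows" concrete, here is a minimal sketch of the idea: every step in a pipeline is wrapped so its calls are timed and logged. This is illustrative only - the function and step names are hypothetical, not the Semantic Kernel API itself.

```python
import logging
import time
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-workflow")

def observable_step(name: str, step: Callable[[str], str]) -> Callable[[str], str]:
    """Wrap a workflow step so every invocation is timed and logged."""
    def wrapped(payload: str) -> str:
        start = time.perf_counter()
        result = step(payload)
        log.info("step=%s duration_ms=%.1f", name, (time.perf_counter() - start) * 1000)
        return result
    return wrapped

# Hypothetical two-step pipeline: retrieve context, then draft an answer.
retrieve = observable_step("retrieve", lambda q: f"context for {q}")
draft = observable_step("draft", lambda ctx: f"answer based on {ctx}")

print(draft(retrieve("billing question")))
```

In a real deployment the lambdas would be model or database calls, and the log lines would feed a monitoring system, but the control-flow shape is the same.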
The frontend technology is flexible. We can support virtually any frontend framework, though we move fastest in modern stacks such as React, Angular, or Blazor (for a full .NET solution).
We're model-agnostic. By default we use Azure OpenAI for its enterprise-grade security and controls, but we can integrate with other providers - including OpenAI, Anthropic, Google, or open-source models like Llama - when they're a better fit for your needs around data, cost, or policy.
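What "model-agnostic" means in practice is that application code depends on a small provider interface rather than on any one vendor's SDK. The sketch below shows the pattern with stubbed adapters (the class and method names are illustrative, not our actual codebase):

```python
from typing import Protocol

class ChatProvider(Protocol):
    """Minimal interface that every model-provider adapter implements."""
    def complete(self, prompt: str) -> str: ...

class AzureOpenAIProvider:
    """Adapter for the default provider (the real API call is stubbed out)."""
    def complete(self, prompt: str) -> str:
        # In production this would call an Azure OpenAI deployment.
        return f"[azure-openai] response to: {prompt}"

class LlamaProvider:
    """Adapter for a self-hosted open-source model."""
    def complete(self, prompt: str) -> str:
        # In production this would call a local inference server.
        return f"[llama] response to: {prompt}"

def answer(provider: ChatProvider, prompt: str) -> str:
    """Application code sees only the interface, never the vendor SDK."""
    return provider.complete(prompt)

print(answer(AzureOpenAIProvider(), "Summarize my open invoices"))
```

Swapping providers is then a one-line change at configuration time, which is what lets data, cost, or policy requirements drive the choice of model.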
It's a very fair question - and the short answer is yes, when done carefully. Letting an AI loose on production data without safeguards introduces real risks, which is why we've built our approach around safety and control.
For read-only use cases (like searching or summarizing data), the risks are low - the worst outcome might be a confusing or unhelpful answer. But for write operations (like adding or editing records), we take extra care.
Gibbonix is designed to keep you in control.
In short: you get the power of natural language interfaces without the fear of losing control over your app's data.
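The read/write distinction above can be sketched in a few lines: read-only tools execute directly, while any tool that modifies data is gated behind an explicit user confirmation. The tool names and the `confirm` callback are hypothetical, shown only to illustrate the control flow:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    run: Callable[[str], str]
    writes_data: bool  # read-only tools run freely; writes need approval

def execute(tool: Tool, arg: str, confirm: Callable[[str], bool]) -> str:
    """Run a tool, asking the user first whenever it would modify data."""
    if tool.writes_data and not confirm(f"Allow '{tool.name}' on {arg!r}?"):
        return "cancelled by user"
    return tool.run(arg)

# Illustrative tools: a safe search and a destructive delete.
search = Tool("search_records", lambda q: f"results for {q}", writes_data=False)
delete = Tool("delete_record", lambda rid: f"deleted {rid}", writes_data=True)

print(execute(search, "invoices", confirm=lambda msg: True))  # runs directly
print(execute(delete, "rec-42", confirm=lambda msg: False))   # blocked
```

The worst case for a read-only tool is an unhelpful answer; a write only happens after the user has said yes, which is the safeguard the answer above describes.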
No. AI is there to assist, not replace. We design for human-in-the-loop by default: the AI can suggest, draft, or automate routine steps, but high-impact or irreversible actions always require explicit user confirmation. You and your users stay in control, with AI working as your co-pilot.
No. From the Gibbonix side, your prompts and outputs are never used to train public models. On the provider side, the major enterprise LLM providers (Azure OpenAI, OpenAI, Anthropic, Google) exclude customer API data from model training by default, and self-hosted open-source models keep your data entirely within your own environment. We also configure each integration carefully to ensure your data stays private.