Secure, reliable & compliant API Gateway

PROBLEM: A 5,000-person, multi-national, publicly traded financial services company wanted to use LLMs in production, but faced significant technical and regulatory barriers to doing so. From a regulatory perspective, they needed strong assurances that their usage of LLMs would comply with legal requirements in the US, UK, and EU. From a business perspective, they needed the monitoring required to manage API costs and usage, and to ensure their systems were robust to outages in individual foundation model providers’ APIs (like OpenAI’s).

SOLUTION: Credal gives them a secure API gateway that has been certified under the EU-US Data Privacy Framework to comply with European GDPR (unlike foundation model APIs like OpenAI’s), and that also complies with CCPA, the EU AI Act, and other regulatory requirements. Performance, cost, and security logging is handled automatically and can be integrated with existing third-party performance monitoring, cost monitoring, and audit logging solutions like DataDog, CloudZero, and Splunk. In addition, retries and failovers can all be handled automatically.

IMPACT: Much smoother responses to RFPs and infosec questionnaires, saving iteration time, avoiding disrupted work, and saving $25k in Responsive contract costs.

Support every model, but manage costs, security, and performance monitoring in one place

Credal’s API supports multiple models while letting you track security, cost, and compliance in a single place. Our architecture also interoperates seamlessly with open-source third-party tools like LiteLLM, which let you make calls to other LLM providers, like Anthropic, using the OpenAI API syntax.
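As a minimal sketch of the multi-provider pattern described above, the snippet below uses the open-source LiteLLM client to send the same OpenAI-style request to two different providers. The model identifiers are examples, and API keys are assumed to be set in the environment; this illustrates the interop pattern rather than Credal’s documented configuration.

```python
# Assumes OPENAI_API_KEY and ANTHROPIC_API_KEY are set in the environment.
from litellm import completion

# The same OpenAI-style request shape works across providers;
# only the model identifier changes.
for model in ["gpt-4o", "anthropic/claude-3-5-sonnet-20240620"]:
    response = completion(
        model=model,
        messages=[{"role": "user", "content": "Summarize our Q3 risk report."}],
    )
    print(model, response.choices[0].message.content)
```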

Simple, backwards compatible APIs

Credal’s APIs are extremely simple, and Credal can even be used through the first-party LLM-provider client libraries.
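The sketch below shows the general shape of this backwards-compatible pattern: the official OpenAI Python client is pointed at a gateway endpoint via its base_url parameter. The URL and credential name here are hypothetical placeholders, not Credal’s documented endpoint; see the docs for the real values.

```python
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://gateway.example.com/v1",  # hypothetical gateway URL
    api_key=os.environ["GATEWAY_API_KEY"],      # hypothetical credential name
)

# Standard OpenAI client call, routed through the gateway unchanged.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Draft a compliance summary."}],
)
print(response.choices[0].message.content)
```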

Read the Docs

Automatic failover and retries

LLM foundation model provider APIs have been notoriously unreliable for production use cases, so Credal implemented optional automatic failover to Azure OpenAI and AWS Bedrock (for Claude), plus exponential back-off, decreasing the number of failed well-formed requests by >95%.
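For intuition, here is a minimal sketch of retry-with-exponential-back-off plus provider failover. The provider names, error type, and timing constants are illustrative assumptions, and the gateway handles this automatically, so your own client code would not need to implement it.

```python
import random
import time

# Hypothetical error representing a transient failure (rate limit, timeout, 5xx).
# Real code would catch the provider SDK's own exception types.
class TransientProviderError(Exception):
    pass

def call_provider(provider: str, prompt: str) -> str:
    """Placeholder for an actual API call to the named provider."""
    raise TransientProviderError(f"{provider} unavailable")

PROVIDERS = ["openai", "azure-openai"]  # primary, then failover target

def call_with_failover(prompt: str, max_retries: int = 4, base_delay: float = 1.0) -> str:
    for provider in PROVIDERS:
        for attempt in range(max_retries):
            try:
                return call_provider(provider, prompt)
            except TransientProviderError:
                # Exponential back-off with jitter: ~1s, 2s, 4s, 8s between attempts.
                time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))
        # Retries exhausted for this provider; fail over to the next one.
    raise RuntimeError("All providers exhausted")
```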

Interoperable from the ground up

At Credal, we believe observability for your AI stack should ultimately live alongside observability for your software more generally, so Credal’s automated logging can be trivially integrated with existing solutions for monitoring performance, cloud spend, and security.
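To illustrate what that integration can look like, the sketch below emits a per-request record as structured JSON, the kind of output that existing log pipelines (for example a DataDog agent or Splunk forwarder tailing stdout or files) can pick up. The field names are illustrative assumptions, not Credal’s documented log schema.

```python
import json
import logging
import time

logger = logging.getLogger("llm_gateway")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_llm_request(model: str, prompt_tokens: int, completion_tokens: int,
                    latency_ms: float, cost_usd: float, user_id: str) -> None:
    # One JSON line per LLM call: covers performance, cost, and security auditing.
    logger.info(json.dumps({
        "timestamp": time.time(),
        "event": "llm_request",
        "model": model,
        "prompt_tokens": prompt_tokens,
        "completion_tokens": completion_tokens,
        "latency_ms": latency_ms,
        "cost_usd": cost_usd,
        "user_id": user_id,  # useful for audit trails and cost attribution
    }))

log_llm_request("gpt-4o", 512, 128, 840.0, 0.0062, "analyst-42")
```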

Read the Docs

Building blocks towards secure AI apps

Credal gives you everything you need to supercharge your business using generative AI, securely.

Ready to dive in?

Get Started