Our entire product, including the underlying language model, can be deployed in the customer's cloud environment, ensuring that no sensitive data ever leaves the customer's infrastructure.

Relvy will provide customers with:

  • Docker images needed to bring up Relvy's servers
  • Model checkpoints for serving the underlying LLM
  • Instructions for installation and configuration
  • Engineering support to ensure Relvy runs smoothly
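As a rough sketch of how these pieces might fit together (all image names, ports, environment variables, and paths below are illustrative assumptions, not Relvy's actual artifact names), a Docker Compose file could wire the application server to the container serving the LLM from the delivered model checkpoints:

```yaml
# Illustrative sketch only: service names, images, ports, and paths
# are assumptions; use the values provided with your Relvy delivery.
services:
  relvy-app:
    image: registry.example.com/relvy/app:latest        # hypothetical image name
    ports:
      - "8080:8080"
    environment:
      LLM_ENDPOINT: http://relvy-llm:8000               # hypothetical env var
    depends_on:
      - relvy-llm

  relvy-llm:
    image: registry.example.com/relvy/llm-server:latest # hypothetical image name
    volumes:
      - /opt/relvy/checkpoints:/models                  # delivered model checkpoints
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

In a real deployment, the images would be pulled from a registry the customer is granted access to, and GPU count and instance sizing would follow the Hardware Requirements.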

This deployment model does, however, require the customer to:

  • Bring up and pay for the necessary compute infrastructure.
  • Spend engineering bandwidth on installation and maintenance of Relvy’s servers.
  • Set up local apps for Slack and other integrations where necessary.

Please see the Hardware Requirements section, or reach out to us to learn more.