SCIT IQ turns your company's documents — SOPs, compliance manuals, contracts, product specs — into a conversational knowledge base. In the Cloud and Business tiers, processing runs on SCIT-managed infrastructure. In Private, processing can run inside your own network, including air-gapped deployments. In every tier, your data is not sent to third-party AI providers for inference.
"We don't train on your data" matters, but it is not the same as keeping inference in-house. If a product answers questions by calling a third-party model API, your content still leaves your environment for that inference step.
In API-based AI products, document excerpts and prompts are transmitted to the model provider for inference. TLS protects the connection, but the provider still processes the request.
Retention policies vary by provider and endpoint. OpenAI documents 30-day retention on many API endpoints by default, with retention controls for qualifying organizations on eligible endpoints. Anthropic says API inputs and outputs are deleted within 30 days by default, with zero data retention arrangements available for eligible APIs and approved enterprise customers. Safety- and abuse-related exceptions may still apply in some cases.
Ordinary retention windows are not the only issue: legal process can require additional preservation. In 2025, OpenAI said a court order temporarily required broader preservation of some consumer and API data before that obligation ended on September 26, 2025. The point for compliance teams is simple: contractual defaults and legal holds are not the same thing.
Using a third-party model API can add another vendor to your data flow and another layer of privacy, legal, and compliance review.
SCIT IQ Cloud and Business inference runs on SCIT-managed infrastructure, and your data is not sent to third-party AI providers. SCIT IQ Private inference can run inside your network, including air-gapped environments where required.
SCIT IQ combines enterprise-grade document retrieval with private model inference. The system retrieves relevant passages, sends them to a model running on SCIT-managed infrastructure or deployed inside your network, and returns answers with citations — without sending your data to third-party AI providers for inference.
Upload files directly, connect cloud storage via OAuth, or use 40+ connectors including Google Drive, SharePoint, Confluence, Slack, Salesforce, GitHub, Notion, and email. For supported sources, permissions can be synced from the source system so users only see answers they are allowed to access.
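Permission mirroring can be pictured as a simple filter applied before retrieval: each indexed chunk carries the access groups synced from its source system, and a user's question only searches chunks their groups can see. A minimal sketch, assuming hypothetical names (`Chunk`, `visible_chunks`, the group labels) that are illustrative rather than SCIT IQ's actual API:

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    doc_id: str
    text: str
    allowed_groups: frozenset  # groups mirrored from the source system's ACLs

def visible_chunks(chunks, user_groups):
    """Keep only chunks the user is entitled to see, based on the
    group memberships synced from the source system."""
    return [c for c in chunks if c.allowed_groups & user_groups]

chunks = [
    Chunk("hr-policy", "PTO accrues at 1.5 days per month.", frozenset({"all-staff"})),
    Chunk("ma-contract", "Indemnification survives termination.", frozenset({"legal"})),
]

print([c.doc_id for c in visible_chunks(chunks, {"all-staff", "eng"})])
# ['hr-policy']
```

Because the filter runs before retrieval, a user's answer can never quote a passage their source-system permissions would have hidden.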
Documents are chunked, embedded, and indexed for hybrid search — combining semantic similarity with keyword precision. When a question is asked, only the most relevant passages are retrieved; the model never sees the rest of your corpus.
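The hybrid idea — blending a semantic score with a keyword score — can be sketched in a few lines. This is a toy illustration, not SCIT IQ's implementation: the `alpha` weight, the term-overlap keyword score, and the two-dimensional vectors are all stand-ins for a production embedding model and lexical index:

```python
import math

def keyword_score(query, text):
    """Fraction of query terms that appear in the chunk (keyword signal)."""
    q = set(query.lower().split())
    t = set(text.lower().split())
    return len(q & t) / len(q) if q else 0.0

def cosine(a, b):
    """Cosine similarity between two embedding vectors (semantic signal)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_rank(query, query_vec, chunks, alpha=0.5):
    """Rank (text, vector) chunks by a weighted blend of both signals."""
    scored = [
        (alpha * cosine(query_vec, vec) + (1 - alpha) * keyword_score(query, text), text)
        for text, vec in chunks
    ]
    return [text for _, text in sorted(scored, reverse=True)]

docs = [
    ("data retention policy", [1.0, 0.0]),
    ("office lunch menu", [0.0, 1.0]),
]
print(hybrid_rank("retention policy", [0.9, 0.1], docs)[0])
# data retention policy
```

Blending the two signals is what lets the system match exact compliance terminology (keyword) while still finding passages that phrase the same idea differently (semantic).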
Retrieved passages are sent to an open-source LLM running on SCIT-managed infrastructure or, in Private deployments, inside your network on customer-hosted infrastructure. The model reads the passages and generates a cited answer. No API call to third-party AI providers is required for inference.
Answers are returned with citations linking back to the specific document passages they were derived from. Users can verify every claim. Unanswered questions are logged as knowledge gaps for your team to fill.
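Citation-backed answers fall out of how the prompt is assembled: each retrieved passage is numbered before it reaches the model, and the same numbering maps answers back to source documents. A minimal sketch with hypothetical names (`build_prompt` and its prompt wording are illustrative, not the product's actual prompt):

```python
def build_prompt(question, passages):
    """Number each retrieved passage so the model can cite it as [n];
    the returned mapping lets the UI link [n] back to its source doc."""
    lines, citations = [], {}
    for i, (doc_id, text) in enumerate(passages, start=1):
        citations[i] = doc_id
        lines.append(f"[{i}] {text}")
    prompt = (
        "Answer using only the passages below and cite them as [n].\n\n"
        + "\n".join(lines)
        + f"\n\nQuestion: {question}"
    )
    return prompt, citations

prompt, cites = build_prompt(
    "How long are records kept?",
    [("retention-policy.pdf", "Records are kept for 7 years.")],
)
print(cites[1])
# retention-policy.pdf
```

Because every passage the model sees carries a number, any `[n]` in the generated answer resolves unambiguously to one document, which is what makes each claim verifiable.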
"We don't train on your data" is only one control. The bigger architectural question is whether answering a question requires sending your content to a third-party model provider.
| Category | OpenAI API | Anthropic API | SCIT IQ Cloud / Business | SCIT IQ Private |
|---|---|---|---|---|
| Third-party model provider in inference path | Yes | Yes | No | No |
| Customer data hosted outside customer network | Yes | Yes | Yes (hosted by SCIT) | No |
| Default provider retention | 30 days on many endpoints by default; varies by endpoint/configuration | 30 days for API inputs/outputs by default | Per SCIT policy | Per customer environment |
| Retention-control option | Available for qualifying organizations on eligible endpoints | Available for approved enterprise customers on eligible APIs | Not applicable (no third-party provider) | Not applicable (no third-party provider) |
| Sends data to OpenAI or Anthropic | Yes | Yes | No | No |
| Data stays inside your own network | No | No | No | Yes |
| Privacy guarantee type | Contractual / product policy | Contractual / product policy | Architectural: no OpenAI/Anthropic in path | Architectural + customer-controlled network boundary |
"We don't train on your data" is not the whole privacy story. The more important question is whether inference requires sending your content to a third-party model provider. SCIT IQ removes third-party providers from that path. Only SCIT IQ Private keeps the workload inside your own network.
Cloud and Business run on SCIT-managed infrastructure, and your data is not sent to third-party AI providers for inference. Private can run inside your network, with air-gapped deployment available where required.
Cloud includes up to 20 users, a shared SCIT-hosted environment, standard onboarding, and standard support.
Business includes up to 50 users, a dedicated hosted environment, admin controls, and priority support.
Private is a dedicated private deployment for customers that want stronger infrastructure isolation and private-model hosting options.
SCIT IQ isn't just for internal teams. Deploy a white-labeled chat widget on your website, customer portal, or support pages. Your customers ask questions. Your documents provide the answers. Your brand gets the credit.
Teams ask questions about SOPs, compliance procedures, HR policies, product specs. Answers are drawn from your document library with access controls mirroring existing permissions. The AI sees only what each user is authorized to see.
Embed a branded chat widget on your public-facing properties. Customers ask product questions, navigate documentation, and resolve support queries — all answered by AI reading your published docs. One line of JavaScript. Your brand. Your content.
Generic chatbots don't understand HIPAA document structures, contract clause patterns, or CMMC compliance terminology. SCIT IQ's vertical agents are pre-configured with domain-appropriate retrieval strategies, prompts, and output formatting.
Clinical protocol lookup, formulary questions, and document Q&A for HIPAA-sensitive environments. Supports customer-hosted and air-gapped deployment where required. Can integrate with SCIT compliance workflows for organizations that need additional governance.
Contract clause retrieval, legal research support, and firm knowledge management. Designed for legal-document workflows and configurable for sensitive environments. Available through Parassist integration.
Compliance documentation, regulatory Q&A, and internal policy lookup. Deployment and access controls can be configured for stricter hosting and governance requirements. Can integrate with SCIT's compliance engine for standards-aware workflows.
Sensitive-document Q&A for government and defense environments. Supports customer-hosted and air-gapped deployment options for teams that cannot use vendor-hosted inference. Can integrate with SCIT governance workflows for CMMC- and CJIS-related programs.
Private AI and governance solve different problems. SCIT IQ can be integrated with SCIT compliance workflows and case-management context where customers need both retrieval and governance in the same system.
SCIT IQ can be integrated with the SCIT compliance engine. When configured, the system can pair document retrieval with standards-aware workflow logic to surface relevant controls, references, or exceptions.
For healthcare, laboratory, and legal environments running Caseore, SCIT IQ can be integrated with case context so answers reflect both documents and workflow state, subject to customer configuration and permissions.
No sales deck. No six-month procurement cycle. Tell us what documents your organization runs on and we'll show you SCIT IQ answering questions from them without sending your data to third-party AI providers. If your team requires customer-hosted or air-gapped inference, we can scope a Private deployment.
Start the Conversation
Pilots are scoped for regulated organizations. Most teams see their first AI-generated answers from their own documents within one business day of pilot kickoff.