Security

Your code never leaves your machine

Security is not a feature we added. It is how the architecture works. Source code stays local, API keys stay in the OS keychain, and PII gets redacted before any LLM call.

Non-negotiable

6 security rules we never break

These rules are enforced at the code level. They cannot be overridden by configuration, user input, or admin settings.

01
API keys in OS keychain only
All credentials are stored in the operating system's secure keychain -- via IntelliJ's PasswordSafe in the IDE, with encrypted database fields on the backend. Never in settings XML files. Never in log output.
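What this rule means in practice can be sketched as follows. The class names are illustrative stand-ins, not our actual plugin code, and an in-memory map plays the part of the OS keychain: the point is that the object which gets serialized to a settings file carries only a reference to the credential, never the secret itself.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in: in the real plugin the backing store is the OS
// keychain (IntelliJ's PasswordSafe); a map plays that part here.
public class KeychainDemo {
    private static final Map<String, char[]> KEYCHAIN = new HashMap<>();

    // The only thing that may reach a serialized settings file: a service
    // name that points at the keychain entry. No secret inside.
    public record PluginSettings(String credentialServiceName) {}

    public static void storeApiKey(String serviceName, char[] key) {
        KEYCHAIN.put(serviceName, key); // the secret goes to the keychain only
    }

    public static char[] loadApiKey(PluginSettings settings) {
        return KEYCHAIN.get(settings.credentialServiceName());
    }
}
```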
02
Source code never sent to backend
All LLM calls go directly from the plugin running in your IDE to the provider's API endpoint using your own key. Our backend receives only metadata -- status updates, reports, and flow unit lifecycle events.
03
iosMain is read-only
In KMP projects, the iosMain source set is treated as read-only. The BuilderAI system prompt explicitly forbids writes. Any iosMain path in a generated diff triggers an IosMainViolation and is rejected.
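The check behind this rule can be sketched as a simple path guard. The names below are illustrative, and a plain `IllegalStateException` stands in for the IosMainViolation the rule describes:

```java
import java.util.List;
import java.util.regex.Pattern;

// Sketch of the diff guard: any generated diff touching a path inside an
// iosMain source set is rejected before it can be applied.
public class IosMainGuard {
    // Matches "iosMain" as a whole path segment,
    // e.g. shared/src/iosMain/kotlin/Foo.kt
    private static final Pattern IOS_MAIN = Pattern.compile("(^|/)iosMain(/|$)");

    public static void checkDiffPaths(List<String> touchedPaths) {
        for (String path : touchedPaths) {
            if (IOS_MAIN.matcher(path).find()) {
                // Stand-in for the IosMainViolation mentioned above.
                throw new IllegalStateException("IosMainViolation: " + path);
            }
        }
    }
}
```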
04
ProcessBuilder uses explicit arg arrays
All system process execution uses explicit argument arrays -- never shell string concatenation via /bin/sh -c. This prevents command injection attacks at the architecture level.
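A minimal illustration of the difference, using Java's `ProcessBuilder` (the `gitLog` helper is hypothetical): each array element is passed to the OS as one argv entry, so shell metacharacters in user input stay inert.

```java
import java.util.List;

// With an explicit argument array there is no /bin/sh -c and no string
// concatenation, so "main; rm -rf /" is just a (nonexistent) ref name,
// never a second command.
public class SafeExec {
    // Hypothetical helper: builds (but does not start) a git command.
    public static ProcessBuilder gitLog(String userSuppliedRef) {
        return new ProcessBuilder(List.of("git", "log", "--oneline", userSuppliedRef));
    }
}
```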
05
LLM prompts are hardcoded constants
System prompts for PlannerAI, BuilderAI, and CriticAI are compile-time constants. They are never derived from user input. User content is always placed in the user turn, never the system turn.
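A sketch of the construction rule (class, field names, and prompt text below are illustrative, not our real code): the system prompt is a compile-time constant, and user content can only ever become a user-role message, so it cannot rewrite the model's instructions.

```java
import java.util.List;

public class PromptBuilder {
    public record Message(String role, String content) {}

    // Compile-time constant -- never assembled from user input.
    public static final String BUILDER_SYSTEM_PROMPT =
            "You are BuilderAI. Never write to iosMain.";

    public static List<Message> build(String userSourceSnippet) {
        return List.of(
                new Message("system", BUILDER_SYSTEM_PROMPT),
                // User content goes in the user turn, never the system turn.
                new Message("user", userSourceSnippet));
    }
}
```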
06
PII scanner runs on every request
PiiScanner.redact() runs on all source code before any LLM call. Detects and redacts email addresses, phone numbers, API keys, and other sensitive data. No exceptions.
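A simplified sketch of such a redaction pass. The patterns below are illustrative stand-ins (the real scanner covers more categories), and the class mirrors only the shape of the `PiiScanner.redact()` call named above:

```java
import java.util.regex.Pattern;

public class PiiScanner {
    // Simplified patterns for illustration only.
    private static final Pattern EMAIL =
            Pattern.compile("[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}");
    private static final Pattern API_KEY =
            Pattern.compile("sk-[A-Za-z0-9]{16,}"); // illustrative key shape

    // Runs on all source content before it reaches any LLM call.
    public static String redact(String source) {
        String out = EMAIL.matcher(source).replaceAll("[REDACTED_EMAIL]");
        return API_KEY.matcher(out).replaceAll("[REDACTED_KEY]");
    }
}
```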
Architecture

How data flows through the platform

The plugin runs inside Android Studio on the developer's machine. When a flow unit reaches an AI phase, the plugin sends the code directly to the LLM provider API using the developer's own key.

Our backend server never sees source code. It receives only metadata: flow unit status changes, phase transitions, team reports, and LLM usage logs (token counts and costs, not content).
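The metadata contract can be illustrated with a single event shape. Field names below are hypothetical, not our real schema; the point is that the accepted shapes carry identifiers, states, and token counts, with no field that could hold code content.

```java
// Hypothetical metadata event: IDs, phase transitions, and usage counts only.
public record FlowUnitEvent(String flowUnitId,
                            String fromPhase,
                            String toPhase,
                            long inputTokens,
                            long outputTokens) {}
```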

API keys for LLM providers are stored in the OS keychain via IntelliJ's PasswordSafe API. They are never written to settings files, environment variables, or log output.

Data flow architecture

Developer IDE (Plugin + Source Code) --> LLM Provider API (Your Key, BYOK)
Source code flows directly -- never through our servers.

Developer IDE (Plugin) --> Backend Server (Metadata Only)
Status updates, reports, FU lifecycle -- no source code.

Source Code -X-> Backend Server
This path does not exist. Blocked by design.
Compliance

Standards and practices

OWASP Top 10
Architecture reviewed against the OWASP Top 10. Command injection is prevented by Rule 04 (explicit argument arrays); XSS is mitigated because LLM prompts never include user-derived HTML.
Encryption
All API communication over TLS 1.2+. Passwords hashed with bcrypt. JWTs signed with HS256. Credentials encrypted at rest in OS keychain.
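What "signed with HS256" means can be sketched with the JDK's own crypto. A production service would use a JWT library; this shows only the signature step: HMAC-SHA256 over the `header.payload` string, base64url-encoded without padding as JWS requires.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.util.Base64;

public class Hs256 {
    // Signature step of an HS256 JWT: HMAC-SHA256 keyed by the shared secret.
    public static String sign(String headerDotPayload, byte[] secret) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secret, "HmacSHA256"));
            byte[] sig = mac.doFinal(headerDotPayload.getBytes(StandardCharsets.UTF_8));
            // base64url without padding, per the JWS spec
            return Base64.getUrlEncoder().withoutPadding().encodeToString(sig);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }
}
```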
GDPR and DPA
Data Processing Agreement available. GDPR compliance documented. Minimal data collection -- we only store what the platform needs to function.
PII Detection
PiiScanner detects emails, phone numbers, API keys, and other sensitive patterns. Runs on all source content before LLM calls. No exceptions.
Role-Based Access
5 team roles with granular module masking. Enforced at API, UI, and middleware layers. No role can escalate beyond its defined permissions.
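A sketch of module masking under this model (role and module names are illustrative, not our actual role set): permissions are a fixed, data-driven table, and there is no code path that adds a module to a role at runtime -- which is what "no role can escalate" means here.

```java
import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

public class RoleMask {
    public enum Module { FLOW_UNITS, REPORTS, BILLING, ADMIN }
    public enum Role { DEVELOPER, LEAD, ADMIN }

    // Fixed mask table: checked at API, UI, and middleware layers alike.
    private static final Map<Role, Set<Module>> MASK = new EnumMap<>(Map.of(
            Role.DEVELOPER, EnumSet.of(Module.FLOW_UNITS, Module.REPORTS),
            Role.LEAD, EnumSet.of(Module.FLOW_UNITS, Module.REPORTS, Module.BILLING),
            Role.ADMIN, EnumSet.allOf(Module.class)));

    public static boolean canAccess(Role role, Module module) {
        return MASK.get(role).contains(module);
    }
}
```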
Audit Trail
Every flow unit generates a complete event timeline -- 30+ event types covering creation, phase transitions, AI calls, approvals, and errors.
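An append-only timeline can be sketched like this (event type names are illustrative): every flow-unit action appends an event, nothing is updated or deleted, so the list replays as a complete history.

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

public class AuditTrail {
    public record Event(String flowUnitId, String type, Instant at) {}

    private final List<Event> events = new ArrayList<>();

    // Append-only: there is no update or delete operation.
    public void append(String flowUnitId, String type) {
        events.add(new Event(flowUnitId, type, Instant.now()));
    }

    public List<Event> timeline(String flowUnitId) {
        return events.stream()
                .filter(e -> e.flowUnitId().equals(flowUnitId))
                .toList();
    }
}
```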

Security questions?

Read our privacy policy, DPA, and GDPR documentation, or contact us directly for security assessments and compliance inquiries.

Privacy Policy Contact us