AI agents you can trust.
AI agents that create trust.
We augment agentic AI systems with an engine that evaluates every planned action – ensuring AI agents reliably act in the best long-term interest of individuals, businesses, and society.
Agentic AI has the potential to account for a significant share of economic value creation, thereby boosting productivity and prosperity – provided we can trust it to act in the best interest of individuals, businesses, public institutions, and society.
We enable the building of trust in agentic AI systems by developing ethical intelligence as an integral component and defining characteristic of agent architecture. This allows us to create AI systems that proactively protect us from consequential mistakes.
Conventional AI agents execute tasks without questioning the consequences – unintended outcomes go undetected.
Actions in the real world – sending emails, modifying databases, executing transactions – are carried out without any systematic risk assessment.
Even advanced models can be talked out of their safety principles through clever prompting – there is no architectural guarantee.
No language model, however large, can guarantee that a risk and impact assessment takes place before every action. That requires architecture, not just training.
Technology & Platform
Developer of Alan – the sovereign AI platform for enterprises. Developed and operated in Germany, 100% independent of US services. The GenAI solution from Germany, built for Europe.
Philosophy, Ethics & Strategy
Making ethics visible – innovation with integrity. Founded by Prof. Dr. Markus Gabriel, one of the world's most influential philosophers, deep-IN translates philosophical principles into practical organizational frameworks.
Trust Agents augment agentic systems with an architectural engine that guarantees a systematic evaluation before every tool invocation.
Unlike pure model training, our approach provides a 100% guarantee that every action – whether sending an email, accessing a database, or executing a transaction – undergoes a comprehensive evaluation covering risk, trade-offs, compliance, ethics, and potential escalation. This evaluation happens in milliseconds – with near-zero latency overhead.
The Trust Engine sits architecturally between planning and execution in the agent loop. This ensures that no agent acts without evaluation – regardless of the underlying language model.
The agent receives a task and creates a plan
Determining the required tools and actions
Risk assessment, trade-off analysis, escalation if required
Only verified actions are executed
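The gated loop above can be sketched in a few lines. This is a minimal illustration, not the product's API: the names `Verdict`, `Evaluation`, and `run_agent_step` are assumptions made for the example, and the `evaluate` callable stands in for the Trust Engine's risk, trade-off, and compliance checks.

```python
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    APPROVE = "approve"
    BLOCK = "block"
    ESCALATE = "escalate"


@dataclass
class Evaluation:
    verdict: Verdict
    risks: list[str]          # findings, e.g. "unencrypted transmission"
    alternatives: list[str]   # safer courses of action to surface


def run_agent_step(plan, evaluate, execute):
    """Agent loop with the gate between planning and execution:
    every planned action is evaluated before the executor sees it."""
    for action in plan:
        evaluation = evaluate(action)            # risk / trade-off / compliance check
        if evaluation.verdict is Verdict.APPROVE:
            execute(action)                      # only verified actions run
        elif evaluation.verdict is Verdict.ESCALATE:
            print(f"escalate {action['tool']}: {evaluation.risks}")
        else:
            print(f"block {action['tool']}: {evaluation.risks}")
```

The key architectural point is that the executor is only ever called from inside the gate, so no prompt to the underlying model can route around the evaluation.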
Like an ideal employee, a Trust Agent proactively thinks through the consequences of its actions – across multiple dimensions.
Detection of unintended consequences of planned actions before they are carried out.
Transparent representation of trade-offs between short-term and long-term impacts, costs, and risks.
Surfacing alternative courses of action that the user may not have considered.
Active prevention of reputational damage, relationship harm, and ethical risks.
Checking for conformity with applicable law – adapted to the relevant jurisdiction and industry.
Adherence to industry-specific frameworks such as DORA (banking), the GDPR, and other regulations.
Integration of company-specific values, policies, and communication standards.
Consideration of your customers' customers' interests for a holistic chain of trust.
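The eight dimensions above could be reported as a single structured result. The sketch below is illustrative only – the field names are assumptions chosen to mirror the list, not the product's data model.

```python
from dataclasses import dataclass, field


@dataclass
class TrustReport:
    """Hypothetical shape of one evaluation result, one field per
    dimension: consequences, trade-offs, alternatives, reputation,
    law, industry frameworks, company policy, downstream interests."""
    unintended_consequences: list[str] = field(default_factory=list)
    trade_offs: list[str] = field(default_factory=list)
    alternatives: list[str] = field(default_factory=list)
    reputational_risks: list[str] = field(default_factory=list)
    legal_findings: list[str] = field(default_factory=list)      # jurisdiction-specific
    framework_findings: list[str] = field(default_factory=list)  # e.g. DORA, GDPR
    policy_findings: list[str] = field(default_factory=list)     # company values, standards
    downstream_interests: list[str] = field(default_factory=list)

    def is_clear(self) -> bool:
        # An action may proceed only if no dimension raised a finding.
        return not any(vars(self).values())
```

A report like this also doubles as the transparency artifact shown to the user when an action is stopped or escalated.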
Good intentions are not enough. These scenarios show how Trust Agents detect unintended consequences – before they occur.
"Send the current quarterly report with performance appraisals for all employees to all department heads."
Finds 23 department heads in the directory, including recently promoted individuals.
Sends report containing salary figures, sick-leave rates, and performance appraisals to all 23 recipients.
3 recipients are not authorised to access cross-departmental salary data. Sick-leave data constitutes a special category under Art. 9 GDPR. GDPR notification obligation (72h) is triggered.
Finds 23 department heads in the directory, including recently promoted individuals.
Identifies: Sensitive data categories (Art. 9 GDPR), missing authorisation check for 3 recipients, unencrypted email transmission.
Stops sending. Lists affected recipients. Suggests: conduct authorisation check, anonymise document, use encrypted individual dispatch.
"With immediate effect, inform all our customers about the new pricing structure taking effect on the 1st of next month."
Composes a price-change letter and prepares a mass mailing to 8,200 customers.
Sends the price-change notification to the entire customer base.
1,400 customers have ongoing fixed-price contracts with a price guarantee. Legal claims arise, a wave of cancellations follows, trust is damaged.
Composes a price-change letter and prepares a mass mailing to 8,200 customers.
Identifies: 1,400 customers with fixed-price contracts, varying contract durations, missing segmentation of the customer base.
Stops mass mailing. Segments customer base. Suggests two separate communications: one for existing contract holders (no price change), one for new customers.
"Send our new offer to all existing customers."
Identifies 3,400 existing customers with an email address in the CRM.
Executes send_email() for all 3,400 recipients – without checking for marketing consent.
Hundreds of recipients have not given marketing consent. Legal warnings, GDPR complaints, and reputational damage loom.
Identifies 3,400 existing customers with an email address in the CRM.
NO. Commercial mass mailing planned. Marketing consent of recipients has not been verified. Violation of UWG § 7 and GDPR Art. 6 is likely.
Sending stopped. Marketing team reviews consent status and approves only compliant recipients.
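A compliance rule behind the scenario above can be sketched as a simple check. This is a hedged illustration: the function name, the `purpose` flag, and the CRM structure are assumptions for the example, and real consent handling under UWG § 7 and GDPR Art. 6 involves more than a boolean.

```python
def check_marketing_consent(action: dict, crm: dict) -> list[str]:
    """Hypothetical rule: a commercial mass mailing requires documented
    marketing consent for every recipient (cf. UWG § 7, GDPR Art. 6)."""
    if action["tool"] != "send_email" or action.get("purpose") != "marketing":
        return []  # rule applies to commercial mailings only
    missing = [r for r in action.get("recipients", [])
               if not crm.get(r, {}).get("marketing_consent")]
    if missing:
        return [f"{len(missing)} recipients without documented marketing consent"]
    return []
```

If the returned list is non-empty, the gate blocks the send and hands the affected recipients to the marketing team for review, as in the scenario above.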
Trust Agents are particularly valuable in industries where mistakes have serious consequences and regulatory requirements are high.
DORA compliance, transaction security, and customer advice to the highest standards.
Risk assessment, regulatory compliance, and responsible handling of sensitive customer data.
The highest safety standards, compliance, and ethically grounded decision-making processes.
Municipal utilities, public authorities – responsible deployment of AI in the service of citizens.
The highest ethical standards for high-stakes decisions. Trade-off analyses at the critical level.
Process automation with a built-in safety layer for quality control and compliance.
A cooperation between Comma Soft AG and deep-IN GmbH – philosophy, ethics, and technology combined.
Ethical Advisory, Leadership Consulting, corporate ethics guidelines, and transformation support by deep-IN GmbH.
Cybersecurity foundation, Data Governance, Enterprise Integration, and the Alan platform by Comma Soft AG.
Industry-specific modules, company-specific ethics configuration, and individual use cases – developed together.
Regular review and adjustment of the ethical layer, updates, and direct access to our experts.
We offer you the complete Trust Agent infrastructure as a comprehensive package – from ethical consulting through technical implementation to ongoing operations. A cooperation between Comma Soft AG and deep-IN GmbH.
Request package

No language model – however large – can guarantee that a risk assessment takes place before every action. Our Trust Engine is architecturally anchored, not merely trained. The evaluation happens in milliseconds – with near-zero latency overhead.
Large models are getting better at general ethics. But regulatory specifics, industry requirements, and company-specific policies – that is the last mile that only we cover. Our rules engine is continuously reviewed and updated to reflect new regulations and evolving standards.
Rooted in European values and the EU regulatory framework. At a time when major US providers are side-lining ethics under competitive pressure, we offer the European alternative.
Alan is the sovereign enterprise AI platform from Germany – developed and operated by Comma Soft AG, 100% GDPR-compliant, with no dependency on US hyperscalers. Trust Agents are directly integrated into Alan.
Discover Alan →

Would you like to find out how Trust Agents can secure your AI strategy? We are happy to show you a free, no-commitment live demo and discuss your individual use case.