Magif.ai Responsible AI Policy (v7)
Ethical AI Governance and EU AI Act Alignment for Technology Providers Using Third-Party Models
Last updated: November 12, 2025
1. Purpose
Magif.ai provides a secure, ethical, and human-centered AI platform that enables experts — including coaches, consultants, educators, and organizations — to build and deploy conversational and analytical AI experiences.
Magif.ai does not develop or train large language models (LLMs). The platform integrates and orchestrates trusted third-party AI models, including those developed by OpenAI, within a secure and transparent governance framework.
This policy defines Magif.ai's commitments to:
• Responsible AI design and integration
• Ethical data handling and transparency
• Compliance with the EU AI Act and global AI regulations
2. Scope
This policy applies to:
• All AI integrations, data processing, and automation features developed, maintained, or hosted by Magif.ai
• All employees, partners, and contractors involved in AI system management or deployment
• All experts and organizations using Magif.ai to create their own AI agents or experiences
Experts remain responsible for their own professional ethics and client practices. Magif.ai governs how AI is implemented, not how it is used in specific professional contexts.
3. AI Model Sources and Responsibilities
Magif.ai integrates third-party AI models via secure APIs — primarily:
• OpenAI GPT-4o models (for text and reasoning capabilities)
Magif.ai's Responsibilities:
• Provide safe, transparent, and compliant infrastructure for integrating these models
• Implement ethical guardrails, content filters, and user disclosures
• Deploy LLM_ANONYMISER (in development; see Section 6) to anonymize data before external processing
• Log and monitor AI activity for security, privacy, and misuse detection
• Communicate known model limitations and best practices to experts
Third-Party Model Providers' Responsibilities:
• Develop, train, and maintain the base AI models (architecture, data, and weights)
• Ensure those models meet their own stated safety, transparency, and legal obligations
• Address inherent risks such as bias, factual inaccuracies, or hallucinations in model behavior
Limitation of Liability:
Magif.ai cannot be held responsible for factual, ethical, or analytical errors, bias, or unpredictable behaviors originating from external foundation models (e.g., OpenAI). While Magif.ai applies safety filters, anonymization, and monitoring, the underlying model behavior remains the responsibility of the model provider.
4. Magif.ai's Ethical AI Principles
1. Transparency
We clearly disclose when AI is used and what systems are involved.
• Each agent identifies itself as AI-powered
• Documentation describes data flow, providers, and limitations
• Significant model or provider changes are communicated to users
2. Accountability
We are accountable for our platform layer — the ethical use, control, and governance of integrated AI systems. Experts are accountable for their use of AI outputs within their own practice areas.
3. Fairness and Inclusion
We test system templates for bias and encourage experts to validate fairness in their own use cases.
4. Privacy and Security
• Data encrypted in transit (TLS 1.2+) and at rest (AES-256)
• Role-based access control and multi-factor authentication
• Data never used to train external models
• Experts may export or delete their client data at any time
• LLM_ANONYMISER (in development; see Section 6) will anonymize sensitive data before external processing
5. Human-Centered Design
AI should augment human expertise, not replace it. Magif.ai enables AI features that support reflection, insight, and productivity while preserving human judgment.
6. Reliability and Safety
• Continuous testing for stability and coherence
• Incident-response and monitoring procedures for misuse or abnormal behavior
5. Data Storage and Processing Transparency
Personal Information Storage (Paris, France 🇫🇷):
• Location: AWS infrastructure in Paris, France
• Data type: Names, email addresses, account details, user profiles
• Usage: Platform authentication and account management ONLY
• Security: AES-256 encryption at rest, TLS 1.2+ in transit
• Training: NEVER used for AI model training
• Access: Role-based access control, GDPR-compliant
• Control: Users can export or delete their data at any time
Expert Knowledge Storage (Berlin, Germany 🇩🇪):
• Location: Qdrant vector database in Berlin, Germany
• Data type: Books, articles, methods, coaching content, expert knowledge
• Usage: AI agent responses and semantic search
• Security: Encrypted storage, isolated per client
• Sharing: NEVER shared between clients or made available to others
• Provider: Qdrant (open-source vector database, Berlin-based)
• Control: Experts control their knowledge base and can delete content
Data Processing Flow:
• All data remains on Magif.ai servers (Paris & Berlin) by default
• Data ONLY leaves our servers when sent to OpenAI for AI agent processing
• This is the ONLY point at which data exits our infrastructure
• Before external processing, data will pass through LLM_ANONYMISER for anonymization (once deployed; see Section 6)
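The flow above can be sketched in a few lines. This is an illustrative outline only, not Magif.ai's actual code: the function names are hypothetical, the PII detection is reduced to a single hard-coded name, and the OpenAI call is stubbed so the example is self-contained.

```python
def anonymize(text: str):
    # Stand-in for LLM_ANONYMISER (see Section 6): a single
    # hard-coded name plays the role of detected PII.
    mapping = {"<PII_0>": "Alice"}
    return text.replace("Alice", "<PII_0>"), mapping

def call_openai(text: str) -> str:
    # Stub for the external API call: the ONLY point at which data
    # leaves Magif.ai infrastructure (Paris & Berlin servers).
    return f"Summary for {text}"

def restore(text: str, mapping: dict) -> str:
    # Runs back on Magif.ai servers: restores the original context.
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

def process_agent_request(user_message: str) -> str:
    anonymized, mapping = anonymize(user_message)  # on Magif.ai infra
    response = call_openai(anonymized)             # sole external egress
    return restore(response, mapping)              # context restored locally
```

The key design point this sketch captures is that anonymization gates the one egress path: nothing reaches the external provider except what `anonymize` has already processed.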
6. LLM_ANONYMISER: Proprietary Anonymization Engine
LLM_ANONYMISER is Magif.ai's proprietary data protection technology:
Technology:
• Built on open-source AI models
• Runs 100% on Magif.ai infrastructure
• Processes data BEFORE it leaves our servers
Functionality:
• Automatically detects personally identifiable information (PII)
• Replaces sensitive data with anonymized tokens
• Sends anonymized data to OpenAI for agent processing
• Seamlessly restores original context when receiving responses
• Provides strong privacy protection without degrading AI response quality
Protection Scope:
• Names, locations, personal identifiers
• Sensitive business information
• Client-specific details
• Any data deemed sensitive by detection algorithms
Status: Coming Soon (In Development)
Once deployed, LLM_ANONYMISER will provide an additional layer of privacy protection for all Magif.ai users.
7. Prohibited and Restricted Uses
Magif.ai prohibits use of its technology for practices classified as unacceptable risk under the EU AI Act (Regulation (EU) 2024/1689), including:
• Social scoring or surveillance
• Manipulative techniques exploiting vulnerabilities
• Emotion recognition in workplaces and educational settings
• Real-time remote biometric identification in publicly accessible spaces
All Magif.ai functions are designed as limited-risk systems. Deployments in high-risk domains (e.g., HR, education, health, financial services) require users to implement appropriate human oversight and risk management.
8. Responsible Use by Experts
Experts and organizations using Magif.ai must:
• Disclose that AI is used and explain its role
• Obtain consent when collecting personal data
• Avoid using AI for medical, therapeutic, or legal advice unless qualified
• Monitor and review AI outputs before applying them in client contexts
• Comply with applicable data and professional regulations
9. AI Governance Structure
AI Ethics & Risk Committee:
Oversees ethics, fairness, safety, and EU AI Act readiness (risk classification, conformity documentation, post-market monitoring).
Technical Security Team:
Maintains infrastructure, encryption, uptime, and incident response.
Legal & Privacy Officer:
Ensures GDPR and AI Act compliance; manages data-protection impact assessments.
Expert Relations Team:
Provides ethical AI education and collects feedback on third-party model performance.
10. EU AI Act Compliance and Roles
Under the EU AI Act:
• Magif.ai is an AI provider and integrator, offering the technical infrastructure to host and orchestrate AI services
• OpenAI is the foundation model provider, responsible for their own conformity, transparency, and technical documentation
• Experts using Magif.ai are AI deployers, responsible for lawful and ethical use in their professional context
Magif.ai's compliance measures include:
• Risk classification of features (minimal, limited, high)
• Technical documentation for integrated models, system design, and purpose
• Risk management and mitigation — safety testing and bias checks
• Human oversight enablement for all automated processes
• Transparency obligations — AI labeling and model attribution
• Traceability — logging model versions, API usage, and data flow
• Post-market monitoring — continuous review of behavior, accuracy, and user reports
Experts remain responsible for ensuring that AI use cases comply with EU AI Act deployer requirements, especially when operating in sensitive or high-risk domains.
11. Data Governance and Lifecycle
Collection: Minimal, purpose-bound data collection
Storage: Compliant, encrypted, EU-based data centers (Paris & Berlin)
Access: Restricted by role and need-to-know principle
Retention: Controlled by the expert's configuration; deletable on demand
Deletion: Verified erasure with system logging
External Processing: Data only leaves servers when sent to OpenAI for AI agent functionality
Magif.ai conducts annual data-flow reviews to ensure transparency and integrity.
12. Explainability, Traceability, and Documentation
Magif.ai maintains transparency by:
• Providing plain-language summaries of integrated model types and logic
• Offering model cards describing AI sources (OpenAI)
• Logging version histories, API activity, and major configuration updates
• Making documentation available to enterprise clients and regulators upon legitimate request
13. Post-Market Monitoring
Magif.ai continuously monitors AI behavior for reliability, safety, and bias. Findings are reviewed by the Ethics & Risk Committee and corrective actions are taken as needed. If required under the EU AI Act, significant incidents will be reported to supervisory authorities.
14. Continuous Improvement
We regularly update our systems and documentation to meet evolving standards in ethical AI, privacy, and compliance. Quarterly reviews evaluate performance, transparency, and user satisfaction.
15. Transparency Artifacts
Available upon request or via the Magif.ai Trust Center:
• Privacy Policy & Terms of Service (reviewed within the last 18 months)
• Security & Reliability Overview
• AI Governance and Ethics Framework
• Data-Flow Diagram / System White Paper
• Model Transparency Summary (OpenAI integration)
• Responsible AI Use Guide for Experts
• EU AI Act Compliance Statement
Magif.ai is preparing for EU AI Act conformity assessments and will issue a Declaration of Conformity and CE marking once the applicable provisions of the Act take effect.
16. Reporting and Incident Management
Users can report bias, privacy issues, or misuse to [email protected].
Incidents involving model behavior from third-party providers will be forwarded to those providers (OpenAI) as part of collaborative post-market monitoring.
17. Policy Review
This policy is reviewed annually and whenever material changes occur in AI models, providers, or regulations. Magif.ai communicates updates transparently to users and partners.
Summary
Magif.ai builds ethical, transparent, and compliant AI infrastructure powered by trusted external AI models. We provide the governance, safety, and transparency framework; experts provide the professional application. Responsibility for foundational model performance lies with the original model providers. Together, we promote responsible, human-centered, and trustworthy AI across all domains.
Company Information
VW-CODING, SASU with a capital of 4000€, registered with the RCS of Paris under number 914 650 387, headquartered at 121 quai de Valmy, 75010 Paris, VAT number FR41914650387.
Contact Information
Ethics & Compliance:
[email protected]
General Support:
[email protected]
Website: www.magif.ai
Data Protection Officer (DPO): Available upon request at [email protected]