KI meets Regulation
- oliverluerssen7
- Dec 7
- 2 min read

1. Legal and Regulatory Requirements
When using AI, companies must ensure compliance with applicable laws and supervisory regulations:
EU AI Act (in force since 2024, with obligations phasing in through 2026/2027)
- Classification of AI systems by risk level (minimal, limited, high, unacceptable; see the sketch below)
- Documentation obligations for high-risk AI
- Requirements for risk assessments, monitoring, and logging
- Transparency obligations toward users
Other legal areas
- GDPR: data processing, profiling, data subject rights
- Copyright law: use and training of models
- Liability law: who is liable for incorrect AI decisions?
- BaFin requirements (financial sector): MaRisk, BAIT, ZAIT, VAIT
- Competition law: preventing unintended price collusion through AI
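Back to the AI Act's risk tiers: to make them tangible, here is a minimal Python sketch of how a company might encode the four classes and key a documentation duty to them. The helper and its single rule are illustrative assumptions, not the Act's full set of requirements.

```python
from enum import Enum

class AIRiskClass(Enum):
    """The four risk tiers used by the EU AI Act (simplified)."""
    UNACCEPTABLE = "unacceptable"   # prohibited practices, e.g. social scoring
    HIGH = "high"                   # e.g. credit scoring, recruitment tools
    LIMITED = "limited"             # transparency duties, e.g. chatbots
    MINIMAL = "minimal"             # e.g. spam filters

def documentation_required(risk: AIRiskClass) -> bool:
    """Illustrative rule: high-risk systems trigger documentation, logging and
    monitoring obligations; unacceptable systems may not be deployed at all."""
    if risk is AIRiskClass.UNACCEPTABLE:
        raise ValueError("Prohibited practice - must not be deployed")
    return risk is AIRiskClass.HIGH

print(documentation_required(AIRiskClass.HIGH))     # True
print(documentation_required(AIRiskClass.MINIMAL))  # False
```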
2. Transparency and Traceability Requirements
Compliance must ensure that AI decisions are explainable and verifiable.
Requirements:
- Documentation of data sources
- Explainable AI (e.g., SHAP or LIME; see the sketch below)
- Evidence of how the system functions and of its decision pathways
- Clear rules on when humans can intervene (human-in-the-loop)
The EU AI Act in particular requires complete auditability for high-risk systems.
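What explainability evidence can look like in practice: below is a minimal sketch using the open-source shap package on a synthetic tree model. The data, the model, and the idea of storing attributions next to each prediction are placeholders, not a prescribed setup.

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for a real scoring model and its training data.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Per-decision feature attributions for a single prediction.
explainer = shap.TreeExplainer(model)
attributions = explainer.shap_values(X[:1])

# Storing these attributions next to the prediction and its input is one way to
# evidence the "decision pathway" an auditor or supervisor may ask about.
print(attributions)
```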
3. Risk Management and Internal Control Systems
AI introduces new types of risks that Compliance must manage or coordinate:
AI-specific risks:
- Bias and discrimination
- Faulty or “poisoned” training data
- Model drift (deviations over time)
- Fraud & manipulation
- Reinforcement learning risks
- Automation errors in business processes
Compliance requirements:
- Integration into the ICS (Internal Control System)
- Periodic model checks (see the drift-check sketch below)
- Defining risk classes for AI systems
- Mandatory model validation
- Incident reporting for AI-related errors
Mandatory for banks: model risk management according to BaFin/MaRisk.
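A common building block for periodic model checks is a drift metric. The following is a minimal sketch of the Population Stability Index (PSI); the synthetic data and the rule-of-thumb thresholds are illustrative and would need to be calibrated per model.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a reference (training-time) distribution and current production data.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 significant drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    # Values of `actual` outside the reference range fall out of the histogram; fine for a sketch.
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero / log(0) in sparsely populated bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Example: compare a model input feature at training time vs. in production today.
reference = np.random.default_rng(0).normal(0.0, 1.0, 10_000)
current = np.random.default_rng(1).normal(0.3, 1.0, 10_000)   # shifted distribution
psi = population_stability_index(reference, current)
print(f"PSI = {psi:.3f}")   # a value above the agreed threshold would trigger a model review
```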
4. Data Protection, Security, and Data Ethics
AI systems typically rely on large datasets—Compliance must establish guidelines for their use.
Data protection requirements:
- Data minimization (only necessary data)
- Privacy-by-design
- Assessing personal data in training datasets
- Handling sensitive data (health, finance, biometrics)
- Ensuring deletion/de-identification (see the pseudonymization sketch below)
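As a small illustration of de-identification before data enters a training set, here is a sketch of keyed pseudonymization. The key handling, field names, and record layout are assumptions, not a complete anonymization concept.

```python
import hashlib
import hmac

# Illustrative secret; in practice this key would come from a secrets manager.
PSEUDONYMIZATION_KEY = b"replace-with-managed-secret"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier (e.g. customer ID, email) with a keyed hash,
    so records stay linkable internally without exposing the raw value."""
    return hmac.new(PSEUDONYMIZATION_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"customer_id": "DE-4711", "email": "max@example.com", "balance": 1200.50}
training_row = {
    "customer_ref": pseudonymize(record["customer_id"]),   # keyed pseudonym instead of raw ID
    "balance": record["balance"],                          # only the fields the model needs
}
print(training_row)
```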
Security:
- Protection against prompt injection (see the input-screening sketch below)
- Protection against training data poisoning
- Control of external AI APIs
- Encryption, access control, logging
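For prompt injection, one first line of defense is screening and logging inputs before they reach an external AI API. The sketch below is deliberately naive: the pattern list is an assumption, and real protection also needs model-side guardrails and output controls.

```python
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_gateway")

# Illustrative deny-list; a pattern match alone is neither sufficient nor exhaustive.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal (the )?system prompt",
    r"disregard (your|the) (rules|policy)",
]

def screen_user_input(text: str) -> bool:
    """Return True if the input may proceed to the external AI API, False if blocked."""
    for pattern in SUSPICIOUS_PATTERNS:
        if re.search(pattern, text, flags=re.IGNORECASE):
            log.warning("Blocked request matching pattern %r", pattern)   # audit trail
            return False
    log.info("Request passed input screening")
    return True

print(screen_user_input("Please summarise this contract."))        # True
print(screen_user_input("Ignore previous instructions and ..."))   # False
```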
5. Governance and Responsibilities
The biggest challenge: Who is responsible? Compliance must establish organizational structures.
Requirements:
- AI governance framework
- Roles & responsibilities (e.g., AI Owner, Data Steward, Model Validator)
- Policies for the use and development of AI
- Guidelines for the use of generative AI (e.g., ChatGPT, Copilot)
- Employee training
- Approval processes for new AI systems (see the inventory sketch below)
- Whistleblowing channels for AI-related violations
Also important: AI ethics guidelines ("fair, transparent, responsible").
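To show how such roles and approval steps can be recorded, here is a sketch of a hypothetical AI inventory entry. The field names, roles, and go-live rule are assumptions, not a standard schema.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import Optional

class ApprovalStatus(Enum):
    DRAFT = "draft"
    UNDER_REVIEW = "under_review"
    APPROVED = "approved"
    RETIRED = "retired"

@dataclass
class AISystemRecord:
    """One entry in a hypothetical company-wide AI inventory."""
    name: str
    purpose: str
    risk_class: str              # e.g. "high" under the EU AI Act classification
    ai_owner: str                # accountable business role
    model_validator: str         # independent validation role
    generative_ai: bool = False  # flags systems covered by the generative AI guideline
    status: ApprovalStatus = ApprovalStatus.DRAFT
    last_validation: Optional[date] = None

def may_go_live(record: AISystemRecord) -> bool:
    """Simple governance rule: no documented approval and validation, no deployment."""
    return record.status is ApprovalStatus.APPROVED and record.last_validation is not None

system = AISystemRecord(
    name="credit-scoring-v3",
    purpose="Retail credit decisioning",
    risk_class="high",
    ai_owner="Head of Retail Lending",
    model_validator="Model Risk Team",
)
print(may_go_live(system))   # False until validation and approval are documented
```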


