Effective Date: 1st November 2025
Last Updated: 5th December 2025
This privacy notice is written to be accessible to both adults and young people. If you have questions about any section, please contact us for clarification; our contact details are listed below.
1. Who We Are
ChatMinder is operated by Sudnik Software Ltd, a company registered in England and Wales (company number: 15922754).
We are the data controller for your personal information. You can contact us with any privacy concerns at:
- Email: support@chatminder.app
- Phone: 07347 255677
2. What ChatMinder Does
ChatMinder is designed to help children communicate safely with friends and family through supervised messaging.
All messages are subject to parental/guardian supervision and approval before being sent or received. The level of supervision and approval is decided by the parent/guardian for each child.
AI-Assisted Safety: In addition to parental controls, ChatMinder uses artificial intelligence (AI) technology to help identify potentially unsafe content in messages. This AI moderation works alongside parental supervision to provide an extra layer of protection. AI is a tool to support, not replace, parental oversight: parents remain in full control of their child’s messaging experience, and you can opt out of AI moderation at any time.
3. Information We Collect
3.1 Child Users (Under 18)
When we refer to a child, we mean anyone under the age of 18. This is in accordance with the UN Convention on the Rights of the Child, which defines a child as everyone under 18 unless “under the law applicable to the child, majority is attained earlier” (Office of the High Commissioner for Human Rights, 1989). The UK has ratified this convention.
We collect minimal information necessary to provide our service safely:
Account Information:
- First name
- Age bracket
- Approved contact list
Communication Data:
- Messages sent and received (before and after moderation)
- Message timestamps
- Communication patterns for safety monitoring
- AI-generated content flags and safety scores
- Keywords or patterns identified by automated moderation
- False positive/negative feedback on AI moderation decisions
Technical Information:
- Device type and operating system
- App usage data and error logs
- IP address (for security purposes only)
3.2 Parent/Guardian Users
When we refer to a Parent or Guardian, we mean someone who, under the law of the child’s country of residence, holds the legal rights and responsibilities for a child that are normally afforded to parents. This will not always be a child’s ‘natural parent’, and parental responsibility can be held by more than one natural or legal person.
Account Information:
- Full name and email address
- Phone number
- Relationship to child
Verification Information:
- Payment information (processed by third-party payment processors)
4. Legal Basis for Processing
We process personal data under the following legal bases:
For Children’s Data:
- Parental consent (GDPR Articles 6(1)(a) and 8)
- Contract performance (GDPR Article 6(1)(b))
- Compliance with legal obligations (GDPR Article 6(1)(c))
For Parent/Guardian Data:
- Consent (GDPR Article 6(1)(a))
- Contract performance (GDPR Article 6(1)(b))
- Compliance with legal obligations (GDPR Article 6(1)(c))
5. How We Use Your Information
We use personal data only for:
- Enabling supervised communication between approved contacts
- Parental/guardian oversight and message approval
- Child safety monitoring and protection
- Service improvement and technical support
- Legal compliance and safety reporting where required
- AI-powered content moderation to identify potentially unsafe content
- Improving the accuracy and effectiveness of our safety systems using aggregated and anonymized data only
We never use Parent/Guardian or children’s data for:
- Marketing or advertising
- Profiling for commercial purposes
- Automated decision-making that affects the child
- Sharing with third parties for their commercial purposes
5.1 AI-Powered Content Moderation
ChatMinder can use AI technology to analyze messages for potential safety concerns. You can fully opt out of this feature at any time. When enabled, it helps protect children by identifying:
- Inappropriate or explicit content
- Potential grooming behaviors or predatory language
- Indicators of bullying, threats, or harassment
- Self-harm or suicide-related content
- Age-inappropriate material
- Other safety risks as they evolve
How AI Moderation Works:
- AI analyzes message content in real-time before it reaches the parent/guardian approval queue
- Messages flagged by AI are prioritized for parental review
- AI moderation works alongside parental controls—it does not make final decisions about message delivery
- Parents can review AI flags and provide feedback if they believe a message was incorrectly flagged
AI Training and Learning:
- Our AI is trained on general datasets designed to identify harmful content
- We do not use your child’s individual messages to train AI models
- We may use aggregated, anonymized data to improve moderation accuracy
- AI providers do not retain or use your children’s messages for their own purposes
Human Oversight:
- Flagged content may be reviewed by trained safety personnel under strict confidentiality
- Parents receive explanations for AI flags when messages are blocked or flagged
6. Data Sharing and Disclosure
6.1 Routine Sharing
- Messages are shared only between the child, approved contacts, and supervising parents/guardians
- Technical support may access data to resolve service issues (under strict confidentiality and only when asked by the parent/guardian or for investigation of specific issues)
6.2 Legal Requirements
We may disclose information when required by law or to protect child safety, including:
- Law enforcement requests with appropriate legal authority
- Child protection services
- Court orders
- Emergency situations involving immediate risk to a child’s safety
6.3 Service Providers
We work with carefully selected service providers who help us operate ChatMinder:
- Cloud hosting services: Microsoft Azure, UK (with data processing agreements)
- Payment processors: Apple, Google, Stripe (who handle payment data separately)
- Technical support services [IF EXTERNALS & list location]
- AI model provider (Azure AI) who process the chat moderation:
- Messages are shared with the AI provider for real-time content analysis
- Messages are [anonymized/pseudonymized] before AI analysis where technically possible
- The AI provider does NOT retain message content after analysis
- The AI provider does NOT use your children’s messages to train their own models
- Processing occurs under strict data processing agreements
- Processing can occur in real-time and/or in batches
All service providers are contractually bound to protect your data and to use it only for the purposes specified in this notice.
7. International Data Transfers
This service is marketed only to UK/EU users, and data is primarily processed within the UK. Where data must be transferred internationally, we ensure appropriate safeguards, including:
- Adequacy decisions by relevant authorities
- Standard Contractual Clauses
- Certification schemes where applicable
8. Data Retention
Chat/Communication Data: Retained for 12 months after account closure
AI Moderation Data: Content flags and safety scores retained for 12 months for:
- Improving moderation accuracy and reducing false positives
- Investigating safety incidents and patterns
- Regulatory compliance and transparency reporting
- System improvement using anonymized/aggregated data only
Account Data: Deleted within 30 days of account closure
Safety Logs: Retained for up to 7 years where required for child protection purposes
Technical Logs: Deleted after 12 months unless needed for ongoing security investigations
9. Your Rights
9.1 Children’s Rights (exercised by parents/guardians)
- Access: Request copies of your child’s data
- Rectification: Correct inaccurate information
- Erasure: Request deletion of your child’s data
- Portability: Receive your child’s data in a portable format
- Restriction: Limit how we process your child’s data
- Objection: Object to processing based on legitimate interests
- Withdraw Consent: Remove consent at any time (may affect service provision)
- Explanation of AI Decisions: Request explanation of why AI flagged or blocked a message
- Human Review: Request human review of AI moderation decisions
9.2 Parent/Guardian Rights
You have the same rights listed above regarding your own personal data.
9.3 Exercising Rights
Contact us at support@chatminder.app to exercise any rights. We’ll respond within one month and may request identity verification.
10. Age Verification and Parental Consent
- Children under 13: Explicit parental consent required before account creation
- Children 13-17: Parental consent required with additional identity verification
- Parents can withdraw consent at any time, resulting in account deletion
11. Safety Features and Online Safety Act Compliance
ChatMinder includes multiple safety measures:
- All messages can require parental approval before sending or receiving
- Automated content filtering for inappropriate material
- Easy reporting tools for concerning content
- Regular safety assessments and updates
- Clear escalation procedures for child protection concerns
We comply with the UK Online Safety Act by implementing:
- Robust age verification systems
- Proactive content monitoring
- Swift response to harmful content reports
- Regular risk assessments for child safety
- Transparency reporting on safety measures
11.1 Automated Content Filtering Details
AI Detection Capabilities:
- Identifies explicit sexual material, violent content, and graphic imagery
- Recognizes grooming patterns and predatory language tactics
- Detects bullying, harassment, threats, and intimidation
- Flags self-harm, suicide, or eating disorder-related content
- Identifies age-inappropriate material and dangerous challenges/trends
Performance and Accuracy:
- Our AI moderation system is regularly tested and updated
- We maintain transparency about system limitations and continuously work to improve performance
Handling Flags and Appeals:
- When AI flags a message, parents receive a notification explaining the concern
- Parents can review the flagged content and context
- Parents can provide feedback if they believe a flag was incorrect (false positive)
Transparency:
- Parents can view their child’s moderation history in the app
- We provide clear explanations of what triggered each AI flag
- Settings allow parents to adjust sensitivity levels for certain content types
- Regular reports show moderation statistics (number of messages scanned, flagged, blocked)
12. Data Security
We protect your information through:
- Encryption in transit for all messages
- Secure data centers with physical and digital security
- Regular security audits and penetration testing
- Staff security training and background checks
- Incident response procedures
- Secure storage and access controls for AI training data
- Anonymization and pseudonymization of data used for AI model improvement
13. Cookies and Tracking
We use minimal cookies necessary for:
- Account authentication
- Service functionality
- Basic analytics (anonymized)
We do not use advertising cookies or track children across other websites.
14. Changes to This Notice
We’ll notify you of significant changes through:
- Email notification to registered parents/guardians
- In-app notifications
- Updates posted on our website
Continued use of ChatMinder after changes constitutes acceptance of updated terms.
15. Complaints and Concerns
If you have concerns about how we handle personal data, please contact us first at support@chatminder.app; we hope we can resolve your concerns directly.
You also have the right to complain to the appropriate regulatory authority. In the UK this is the Information Commissioner’s Office (ico.org.uk); if you are in an EU jurisdiction, your local data protection authority may be more appropriate.
Child Safety Concerns:
For immediate child safety issues, contact relevant local authorities. In the UK this might be:
- NSPCC (0808 800 5000)
- Childline (0800 1111)
Neither the NSPCC nor Childline is affiliated with ChatMinder.
16. Contact Information
General Inquiries:
support@chatminder.app
Privacy Questions:
support@chatminder.app
Child Safety Team:
support@chatminder.app