

NIQ is committed to the responsible, ethical, and secure use of artificial intelligence and machine learning for the advancement of consumer intelligence technology.
The rapid progression of generative artificial intelligence has leaders across industries talking about the transformative power and potential challenges of AI. At NIQ, conversations around AI safety begin and end with trust.
Clients across the globe have trusted NIQ with their data—the essential input for effective AI—for over 100 years. Today, we empower clients across 90+ countries, 1.4M+ stores, and trillions of records with artificial intelligence and machine learning (ML) that drive actionable insights.
You can trust that we’re applying and adapting our rigorous standards for data security, quality, and privacy to the creation and use of AI solutions for consumer intelligence. Our Principles for AI Safety and principles, rooted in science and expert human intelligence, ensure that every aspect of this work is effective, ethical, and safe.
NIQ is committed to the responsible use of AI and upholds these principles in all GenAI-related activities, from general business practices to technical and operational processes:
GenAI should only be used to benefit people, companies, and the industry we serve. It should be used in a manner that is compassionate, non-discriminatory, and ethical.
GenAI activities should comply with all current and future laws, regulations, and industry standards, respecting variations in geographical locations and types of data used.
The design, development, and deployment of GenAI systems should be transparent and offer clear explanations for design decisions, data usage, and biases or limitations.
Users and creators of AI solutions should have regular training and knowledge-sharing sessions on GenAI ethics to promote awareness and understanding.
AI and ML solutions can produce unpredictable information—all AI outputs should have human oversight to verify its accuracy.
GenAI systems should operate safely throughout their lifecycles with ongoing risk assessment and management. The privacy rights of individuals must be protected—data collection, storage, and processing activities should comply with applicable privacy laws and regulations.
NIQ follows a set of guidelines for general AI safety, which include:
GenAI use complies with our strict Acceptable Use Policy, protecting client data, product details, and employee information. Personal accounts are never used to process data on any GenAI tool.
NIQ explores GenAI tools thoughtfully, with careful attention to their limitations and capacity for inaccuracy. We thoroughly assess their output before business use.
NIQ employs only pre-approved tech for client services, ensuring quality and compliance.
To innovate securely with GenAI, NIQ’s technical teams follow these four guidelines:
NIQ’s GenAI technology and models prioritize user data protection. We take careful measures to limit and protect sensitive information, aligning with corporate data policies and complying with all relevant data protection laws and intellectual property rights and regulations.
When testing new systems, algorithms, or models, NIQ explores GenAI securely limiting data risk, and using only what’s necessary for achieving specific outcomes.
NIQ validates the outcomes of GenAI models using rigorous methods including adversarial testing and fact verification.
NIQ maintains a vigilant stance to protect data using a zero-trust security approach with tools.
For safe and efficient use of GenAI tools (ChatGPT, Bard, DALL·E, etc.), NIQ employees follow these guidelines:
People retain control, especially for critical tasks. NIQ calibrates the level of control smart programs have based on task complexity, ensuring trust in their actions.
NIQ establishes secure environments to test GenAI models rigorously, ensuring close monitoring and safety checks before broad implementation.
NIQ’s systems track and oversee smart programs, enhancing our understanding and enabling intervention if necessary.
Should issues arise, NIQ has the capability to rectify mistakes made by smart programs, maintaining the ability to step in and reverse actions if needed.