Assurance - Customer Data Confidentiality and Retention
OpenAI does not train its Models using inFeedo Customer Data
inFeedo uses OpenAI's API offering, a B2B enterprise service, rather than its consumer products.
OpenAI provides this assurance through its API Data Usage Policy (last updated 14 June 2023) and its official blog on its approach to AI safety.
No data is used for training or improving OpenAI's services unless inFeedo explicitly opts in and informs customers that API data is being shared for this purpose.
OpenAI-Defined Data Deletion/Retention Period
inFeedo has not opted in to share data for training OpenAI's models. OpenAI retains data sent through the API for 30 days, solely to monitor for abuse and misuse.
During these 30 days, only OpenAI employees and contractors with a need to know, bound by confidentiality and security obligations, may access the data to investigate and verify suspected abuse.
The API data is automatically deleted after 30 days.
Employee Education & Training
Employees complete responsible AI usage training, quizzes, and policy attestations designed to help them recognize the possible consequences of using AI technology without good judgment.
These training sessions aim to help inFolks understand the significance of safeguarding Customers' and Company's confidential information.
Ethical Review
We ensure inFeedo's AI usage aligns with the latest legal developments, best practices, and ethical guidelines.
Fairness & Bias Mitigation: Amber is a platform that generates output based on Users' responses and inputs.
Accuracy Assessment and Mitigation: We regularly assess inputs and outputs for accuracy, fairness, and potential biases, and take corrective actions to mitigate any issues found.
Feedback & Edit Rights: Admins can edit Action Items and Acknowledgment emails, and provide feedback on Summaries generated through Chats.
Opt-Out: Customers have the right to opt out of the aforementioned features that may share Personal Information.
5. Compliance, Risk Management, and Governance
AI Risk Management: In May 2023, inFeedo adopted the AI Risk Management Framework (RMF) issued by NIST, and all AI-related risks are managed as it prescribes.
Testing and Scans: OpenAI APIs are scanned, along with all our APIs, by a CERT-In-empaneled third party.
Third-Party Risk Assessment: We adhere to OpenAI's terms of service, usage policies, and guidelines for API usage, ensuring a collaborative and ethical AI ecosystem.
6. Guidelines for AI Usage
Confidentiality: Before feeding any data into AI tools, carefully review and remove any personally identifiable information (PII) or sensitive data. Examples of such information include names, phone numbers, and email addresses.
Transparency: Make it explicitly clear, within the interface, whenever AI is being used in a process or interaction.
Mutual Agreement of Usage: If any confidential data is shared under a mutual agreement between inFeedo and the Customer, the Generative AI provider's Data Privacy Policy must ensure that the data is not used for training and is not retained beyond legally necessary limits.
Evaluation and Fairness: Thoroughly assess generated outputs for any biases, inaccuracies, or unethical suggestions. Generative AI should uphold the principles of fairness, accountability, and transparency. Perform spot checks on anonymized generated data and rely on authorized human reviewers for feedback.
Documentation: Keep a record of inputs to and outputs from the AI. This ensures that you can monitor and analyze its performance, enhance accountability, and support transparency.
Secure Storage: Secure all AI inputs and outputs to uphold data privacy and security regulations. Document storage should follow strict guidelines for data security.
Approved External Content: Any AI-generated content that is sent to people must be reviewed and approved by an appropriate customer and/or company authority, as applicable.
Approved Data Usage: Do not allow AI-generated content to be imported directly into official datasets unless approved by an appropriate customer and/or company authority, as applicable.
Anonymize Identifiers: Replace specific identifiers like “inFeedo,” “Amber,” or “Customer Name” with generic placeholders like “X Company,” “ABC Product,” or “Y Customer.” This ensures that no identifiable information is shared within these tools (see the illustrative sketch after this list).
Generalize Information: Instead of using actual real-world examples, opt for generic or fictional scenarios when providing data for AI tools. This prevents sensitive or private information from being included inadvertently.
Usage Restrictions: Do not use Company Confidential or Customer data for developing machine learning models or related technologies.
Decision Making: Do not rely solely on AI for critical decision-making. Human judgment, experience, and ethical considerations are essential in complex decisions that involve strategic planning, innovation, and risk assessment. AI models can inherit biases from the data they are trained on, leading to unfair or discriminatory outcomes.
Human-machine collaboration: Recognize that AI is a tool to augment human capabilities, not replace them.
Log-in and Security: As a policy, no unapproved consumer AI product may be signed into using an inFeedo employee or group ID.
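To make the Confidentiality and Anonymize Identifiers guidelines above more concrete, the following is a minimal illustrative sketch of how text could be scrubbed before it is sent to an external generative AI tool. The placeholder map, regular expressions, and customer name used here are assumptions for illustration only, not inFeedo's production tooling; real-world redaction should rely on a vetted PII-detection library and human review.

```python
# Minimal illustrative sketch: replace known identifiers with generic placeholders
# and mask obvious PII (emails, phone numbers) before text is shared with an AI tool.
# All names and patterns below are assumptions for illustration only.

import re

# Hypothetical map of specific identifiers to generic placeholders
# (per the "Anonymize Identifiers" guideline above).
PLACEHOLDERS = {
    "inFeedo": "X Company",
    "Amber": "ABC Product",
    "Acme Corp": "Y Customer",  # example customer name, assumed for illustration
}

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def anonymize(text: str) -> str:
    """Return a copy of `text` with known identifiers replaced and basic PII masked."""
    for name, placeholder in PLACEHOLDERS.items():
        text = text.replace(name, placeholder)
    text = EMAIL_RE.sub("[EMAIL REDACTED]", text)
    text = PHONE_RE.sub("[PHONE REDACTED]", text)
    return text

if __name__ == "__main__":
    sample = "Contact Priya at priya@acme.com or +91 98765 43210 about Amber's rollout at Acme Corp."
    print(anonymize(sample))
    # -> Contact Priya at [EMAIL REDACTED] or [PHONE REDACTED] about ABC Product's rollout at Y Customer.
```

Note that simple patterns like these will miss some PII (for example, personal names), so a scrub of this kind supplements, rather than replaces, the manual review called for in the Confidentiality guideline.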
7. We’re here to help
For concerns regarding our services, reach out to help@infeedo.com.
To report an incident, reach out to privacy@infeedo.com.
inFeedo is committed to having meticulous, constant, and robust safeguards to ensure the Responsible and Ethical use of AI. We believe in continuous learning, adapting, and refining our AI practices by keeping abreast of the latest advancements, potential risks, and industry-validated mitigation measures in the AI space.
With Immense respect for Global Privacy Regulations, and Love for all our Patrons
Aditya Kumar (Head - Product at inFeedo)
Seema Pathak (Head - inFeedo’s Data Privacy, Risk, and Compliance Team)