How inFeedo uses OpenAI GPT API

Written by inFeedo
Updated over a year ago

Assurance - Customer Data Confidentiality and Retention

1. OpenAI does not train its Models using inFeedo Customer Data

  • inFeedo uses the API version of GPT, a B2B enterprise offering and not a consumer service (see the illustrative sketch below these points).

  • OpenAI provides this assurance through its API Data Usage Policy (last updated on 14th June 2023) and its official blog on its approach to AI safety.

  • No data is used to train or improve OpenAI’s service offerings unless inFeedo explicitly opts in and informs customers that API data is being shared for this purpose.
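By way of illustration only, the snippet below sketches what a server-side call through the OpenAI API looks like, assuming the official openai Python client. The model name and prompts are placeholders, not inFeedo’s actual implementation; the point is that data flows through the API (to which the retention and no-training terms above apply), not through the consumer ChatGPT interface.

    # Minimal sketch of a server-side OpenAI API call (assumed openai Python client).
    # The model name and prompts are placeholders, not inFeedo's production values.
    from openai import OpenAI

    client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "Summarize the themes in the feedback below."},
            {"role": "user", "content": "Anonymized, PII-free feedback text goes here."},
        ],
    )

    print(response.choices[0].message.content)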

2. OpenAI-defined Data Deletion/Retention Period

  • inFeedo has not opted in to share data for training OpenAI’s models. OpenAI retains data sent through the API for 30 days, solely to monitor for abuse and misuse.

  • During these 30 days, only OpenAI employees and contractors with a need to know, all of whom are bound by confidentiality and security obligations, may access the data to investigate and verify suspected misuse.

  • The API data is automatically deleted after 30 days.

3. Employee Education & Training

  • Employees complete responsible AI usage training, quizzes, and policy attestations that help them identify the possible consequences of not using AI technology with good judgment.

  • These training sessions help inFolks understand the significance of safeguarding Customers’ and the Company’s confidential information.

4. Ethical Review

We ensure inFeedo’s AI usage aligns with the latest Legal developments, best practices, and Ethical Guidelines.

  1. Fairness & Bias Mitigation: Amber is a platform that generates output based on Users’ responses/input.

  2. Accuracy Assessment and Mitigation: We regularly assess inputs and outputs for accuracy, fairness, and potential biases, and take corrective action to mitigate any issues found.

  3. Feedback & Edit Right: Admins have the right to edit Action items and Acknowledgment emails, and to provide feedback on Summaries generated through Chats.

  4. Opt-Out: Customers have the right to opt out of the aforementioned features where those features may share Personal Information.

5. Compliance, Risk Management, and Governance

  1. AI Risk Management: In May 2023, inFeedo adopted the AI Risk Management Framework (RMF) issued by NIST, and identified risks are managed as the framework prescribes.

  2. Testing and Scans: The OpenAI APIs are scanned, along with all our APIs, by a CERT-In empaneled third party.

  3. 3rd Party Risk Assessment: We adhere to OpenAI's terms of service, usage policies, and guidelines for API usage, ensuring a collaborative and ethical AI ecosystem.

6. Guidelines for AI Usage

  1. Confidentiality: Before feeding any data into AI tools, carefully review and remove any personally identifiable information (PII) or sensitive data. Examples of such information include names, phone numbers, and email addresses (a minimal redaction sketch follows this list).

  2. Transparency: Make it explicitly clear when AI is being used in any process or interaction, and use the interface to inform users that AI is in use.

  3. Mutual Agreement of Usage: If any confidential data is being shared as per mutual agreement between inFeedo and the Customer, the Data Privacy Policy of the Generative AI provider must ensure that the data is not being used for training and is not retained beyond necessary legal limits.

  4. Evaluation and Fairness: Thoroughly assess generated outputs for any biases, inaccuracies, or unethical suggestions. Generative AI should uphold the principles of fairness, accountability, and transparency. Do spot checks on anonymously generated data, and also rely on authorized humans for feedback.

  5. Documentation: Keep a record of inputs to and outputs from the AI. This ensures that you can monitor and analyze its performance, enhance accountability, and support transparency (a minimal logging sketch also follows this list).

  6. Secure Storage: Secure all AI inputs and outputs, to uphold data privacy and security regulations. Document storage should follow strict guidelines for data security.

  7. Approved External Content: Any use of AI-generated content that is sent to human beings must be reviewed and approved by an appropriate customer and/or company authority, as applicable.

  8. Approved Data Usage: Do not allow AI-generated content to be imported directly into official datasets unless approved by an appropriate customer and/or company authority, as applicable.

  9. Anonymize Identifiers: Replace specific identifiers like “inFeedo,” “Amber,” or “Customer Name” with generic placeholders like “X Company,” “ABC Product,” or “Y Customer.” This ensures that no identifiable information is shared within these tools.

  10. Generalise Information: Instead of using actual real-world examples, opt for generic or fictional scenarios when providing data for AI tools. This prevents the inclusion of sensitive or private information inadvertently.

  11. Usage Restrictions: Do not use Company Confidential or Customer data for developing machine learning models or related technologies.

  12. Decision Making: Do not rely solely on AI for critical decision-making. Human judgment, experience, and ethical considerations are essential in complex decisions that involve strategic planning, innovation, and risk assessment. AI models can inherit biases from the data they are trained on, leading to unfair or discriminatory outcomes.

  13. Human-machine collaboration: Recognize that AI is a tool to augment human capabilities, not replace them.

  14. Log-in and security: As a policy, no unapproved AI Consumer product should be signed into using an inFeedo employee or group ID.
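To make guidelines 1, 9, and 10 concrete, here is a minimal, hypothetical sketch of the kind of pre-processing step they describe: scrubbing common PII patterns and swapping known identifiers for generic placeholders before any text reaches an AI tool. The regular expressions and the placeholder map are illustrative assumptions, not an exhaustive or production-grade redactor.

    # Hypothetical pre-processing sketch for guidelines 1, 9, and 10: replace known
    # identifiers with generic placeholders and mask common PII patterns before
    # sending text to an AI tool. The patterns and mappings are illustrative only.
    import re

    PLACEHOLDERS = {
        "inFeedo": "X Company",
        "Amber": "ABC Product",
    }

    EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
    PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

    def anonymize(text: str) -> str:
        """Swap known identifiers for placeholders and mask emails and phone numbers."""
        for name, placeholder in PLACEHOLDERS.items():
            text = text.replace(name, placeholder)
        text = EMAIL_RE.sub("[EMAIL REDACTED]", text)
        text = PHONE_RE.sub("[PHONE REDACTED]", text)
        return text

    print(anonymize("Ask jane.doe@example.com about Amber at +91 98765 43210."))
    # -> Ask [EMAIL REDACTED] about ABC Product at [PHONE REDACTED].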
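Likewise, guideline 5 (Documentation) can be as simple as appending each AI interaction to an audit log. The sketch below is a hypothetical example of such record-keeping; the file path, fields, and format are assumptions, and any real log would itself need to follow guideline 6 (Secure Storage).

    # Hypothetical audit-log sketch for guideline 5: record each AI input/output pair
    # with a timestamp so usage can be monitored and reviewed later. The log location
    # and fields are assumptions; the file itself must be stored per guideline 6.
    import json
    from datetime import datetime, timezone

    LOG_PATH = "ai_usage_log.jsonl"  # placeholder path

    def log_interaction(prompt: str, output: str, tool: str = "OpenAI API") -> None:
        """Append one AI interaction to a JSON Lines audit log."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "tool": tool,
            "prompt": prompt,
            "output": output,
        }
        with open(LOG_PATH, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

    log_interaction("Summarize anonymized feedback ...", "Top themes: workload, recognition.")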

7. We’re here to help

inFeedo is committed to having meticulous, constant, and robust safeguards to ensure the Responsible and Ethical use of AI. We believe in continuous learning, adapting, and refining our AI practices by keeping abreast of the latest advancements, potential risks, and industry-validated mitigation measures in the AI space.

With Immense respect for Global Privacy Regulations, and Love for all our Patrons

Aditya Kumar (Head - Product at inFeedo)

Seema Pathak (Head - inFeedo’s Data Privacy, Risk, and Compliance Team)
