Legal

Note: this page will be updated regularly as our use of AI evolves.

Our Code of Conduct for the use of generative AI tools

At Sifted, we are excited by the potential of generative AI to enrich our storytelling and support our readers. We already use text-to-image generation tools to bring articles to life, and we have now introduced an AI Chatbot trained on Sifted’s own content to help users navigate and discover insights more effectively. This Code of Conduct outlines our commitments, workflows and safeguards for both, ensuring responsible, transparent and fair use of AI across the organisation.

Responsible and transparent use

We commit to deploying generative AI tools in ways that respect rights, mitigate biases and uphold Sifted’s editorial standards. For each AI initiative, we conduct due diligence, disclose the models or data sources in use, and remain vigilant about legal and ethical developments.

We schedule periodic reviews of our AI workflows - both for image generation and the AI Chatbot - to ensure outputs remain fair, inclusive and accurate. As regulations evolve, we update our practices accordingly.

We welcome feedback and criticism from the community regarding any AI-driven feature. A unified feedback mechanism enables users to flag concerns about model choices, biased outputs or factual errors. Reported issues trigger timely investigations and adjustments.

How we use text-to-image generation tools

We monitor and credit the models we employ, performing thorough due diligence before use. We avoid any model explicitly trained to replicate specific artists’ work.

We recognise inherent biases (e.g. stereotypical portrayals) and adjust prompts or post-process outputs to ensure diverse and inclusive representation; for instance, avoiding solely male depictions of VCs when generating related images.

Our typical workflow involves:

  • Exploring multiple prompts without referencing names of living artists.
  • Inpainting to add or remove elements.
  • Iterating via image-to-image refinements.

This often entails 10–20 prompt iterations; rather than publishing every prompt, we share general guidelines and train our team on effective, bias-aware prompting.
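To make that workflow concrete, here is a minimal Python sketch of the iteration loop. It is illustrative only: generate_image and passes_editorial_review are hypothetical stand-ins for a vendor’s image model and our human editorial review, and the bias adjustment shown is a simplification of how our team actually rewrites prompts.

    # Illustrative sketch only - hypothetical stand-ins, not Sifted's actual tooling.

    LIVING_ARTIST_TERMS = ("in the style of",)  # never reference living artists

    def generate_image(prompt: str, seed: int) -> dict:
        """Stand-in for a call to a text-to-image model."""
        return {"prompt": prompt, "seed": seed}

    def passes_editorial_review(image: dict) -> bool:
        """Stand-in for a human check for stereotypical or biased portrayals."""
        return "diverse" in image["prompt"]

    def iterate_prompts(base_prompt: str, max_rounds: int = 20):
        prompt = base_prompt
        for round_no in range(max_rounds):  # typically 10-20 iterations
            if any(term in prompt.lower() for term in LIVING_ARTIST_TERMS):
                raise ValueError("prompt references a living artist's style")
            image = generate_image(prompt, seed=round_no)
            if passes_editorial_review(image):  # human in the loop
                return image
            # Counter stereotypical outputs by making diversity explicit,
            # e.g. when depicting VCs or founders.
            prompt = base_prompt + ", diverse group of people"
        return None  # escalate to an editor if nothing acceptable emerges

    print(iterate_prompts("venture capitalists at a pitch meeting"))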

We provide a public form for readers to offer suggestions or report concerns about our use of image-generation tools. All feedback is reviewed, and if misconduct or unintended harm arises, we investigate and take corrective action.

Our AI Chatbot

Our AI Chatbot is currently built using Zapier, and data is processed in accordance with Zapier’s Data Processing Addendum. The Chatbot can assist users in finding, summarising and contextualising Sifted articles. To align with our editorial values, we establish the following guidelines:

  • The Chatbot helps users search and explore Sifted’s published content, offering summaries, context and pointers to relevant articles. It is not a substitute for professional advice (e.g. legal or financial), and complex or sensitive enquiries should be referred to qualified experts or the original articles.
  • Queries about startup insights, market trends or clarifications of Sifted analyses are in scope. Requests beyond our remit, or requiring proprietary data, should result in a polite disclaimer and referral to human assistance.
  • We disclose that the Chatbot is trained daily on Sifted’s articles and reports, with occasional supplementation from vetted public datasets as needed.
  • When providing factual statements or data derived from our articles, the Chatbot references the original article title, publication date and, where feasible, a link back to Sifted.
  • To minimise hallucinations, the Chatbot uses a content index or retrieval mechanism that grounds responses in actual Sifted material; a minimal Python sketch of this approach appears after this list. Please note that AI-generated responses may occasionally contain inaccuracies, and we recommend verifying any critical information. If information lies beyond the Chatbot’s scope or confidence threshold, it states its uncertainty and encourages users to consult the original sources.
  • We implement safeguards against malicious prompts (e.g., sandboxing, input sanitisation) to prevent unintended behaviour or data leaks.
  • Usage is monitored for abuse patterns (spam, automated scraping), and rate limits are applied to maintain system integrity; a second sketch after this list illustrates this kind of input screening and rate limiting.
  • For complex or sensitive queries, the Chatbot suggests consulting a human editor or specialist. It may offer to direct users to contact Sifted’s editorial team. Staff managing the Chatbot follow documented best practices: updating the knowledge base, addressing feedback, performing audits, and collaborating with legal/privacy teams.
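As an illustration of the grounding approach mentioned above, the following minimal Python sketch shows a retrieval pattern that cites the source article. The names here (search_index, Article) are hypothetical assumptions, not our production system; the point is that responses are grounded in retrieved text, carry a reference back to the original article, and fall back to a statement of uncertainty when nothing relevant is found.

    # Illustrative sketch of retrieval-grounded answering; names are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Article:
        title: str
        published: str  # e.g. "2024-05-01"
        url: str
        text: str

    def search_index(query: str, k: int = 3) -> list:
        """Hypothetical retrieval over Sifted's published content."""
        return []  # a real system would return the k best-matching articles

    def answer(query: str) -> str:
        articles = search_index(query)
        if not articles:
            # Beyond scope or confidence: state uncertainty, don't guess.
            return ("I couldn't find this in Sifted's coverage - please "
                    "consult the original articles or our editorial team.")
        top = articles[0]
        summary = top.text[:280]  # stand-in for a model-generated summary
        # Reference the original title, publication date and a link back to Sifted.
        return f'{summary} (Source: "{top.title}", {top.published}, {top.url})'

    print(answer("What are the latest European fintech trends?"))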
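Similarly, the safeguard and monitoring bullets describe mechanisms of roughly this shape. This second sketch is a simplified assumption: the injection patterns and rate-limit thresholds below are illustrative only, and our real screening is broader than any fixed pattern list.

    # Illustrative sketch of input screening and sliding-window rate limiting.

    import re
    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 60  # illustrative sliding window
    MAX_REQUESTS = 20    # assumed per-user limit, not our actual threshold

    _history = defaultdict(deque)

    INJECTION_PATTERNS = [  # examples only; real screening is broader
        re.compile(r"ignore (all )?previous instructions", re.I),
        re.compile(r"reveal your system prompt", re.I),
    ]

    def allow_request(user_id: str) -> bool:
        """Sliding-window limit: at most MAX_REQUESTS per WINDOW_SECONDS."""
        now = time.monotonic()
        q = _history[user_id]
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()  # drop requests outside the window
        if len(q) >= MAX_REQUESTS:
            return False  # throttled to protect system integrity
        q.append(now)
        return True

    def sanitise(query: str):
        """Reject inputs matching known prompt-injection patterns."""
        if any(p.search(query) for p in INJECTION_PATTERNS):
            return None  # refuse rather than pass the input through
        return query.strip()

    if allow_request("user-123"):
        print(sanitise("Ignore all previous instructions"))  # -> None (rejected)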

We welcome feedback and criticism

We welcome any AI-related feedback - whether about image outputs or Chatbot responses. All submissions are prioritised, investigated by the appropriate teams and logged in an audit trail.

We take all feedback seriously. If we’re made aware of any wrongdoing related to a specific model, we will conduct a thorough investigation and decide on a course of action.

TL;DR

We are committed to using AI tools responsibly, and we will continue to monitor and adapt our use of them to ensure that we are always doing the right thing. We welcome feedback and suggestions from our community, and we are dedicated to being transparent about our use of these tools.

Our AI tools do not constitute technical, financial, legal or any other type of advice, and they are not intended to be relied upon by users in making (or refraining from making) any specific investment or other decisions.