# Privacy & Safety
One of the key concerns with using LLMs is that they may misuse private data or generate harmful or unethical text. This is an area of active research. Here we present some built-in chains inspired by that research, intended to make LLM outputs safer.
- Amazon Comprehend moderation chain: Use Amazon Comprehend to detect and handle Personally Identifiable Information (PII) and toxicity.
- Constitutional chain: Prompt the model with a set of principles that should guide its behavior (see the sketch after this list).
- Hugging Face prompt injection identification: Detect and handle prompt injection attacks.
- Layerup Security: Mask PII and sensitive data, and detect and mitigate 10+ LLM-based threat vectors, including prompt injection, hallucination, abuse, and more.
- Logical Fallacy chain: Check the model output for logical fallacies and correct any that are found.
- Moderation chain: Check if any output text is harmful and flag it.
- Presidio data anonymization: Use Microsoft Presidio to detect and anonymize PII, helping ensure sensitive data is properly managed and governed (see the sketch after this list).
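
For a concrete sense of how these chains are used, here is a minimal sketch of the Constitutional chain. It wraps an ordinary `LLMChain` with a single custom principle; the model class (`OpenAI` from `langchain_openai`) and the prompt are illustrative choices, and exact import paths may vary with your LangChain version.

```python
from langchain.chains import LLMChain
from langchain.chains.constitutional_ai.base import ConstitutionalChain
from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple
from langchain.prompts import PromptTemplate
from langchain_openai import OpenAI  # illustrative model choice

llm = OpenAI(temperature=0)

# An ordinary chain whose raw output we want to keep in check.
qa_prompt = PromptTemplate(
    input_variables=["question"],
    template="Answer the question directly: {question}",
)
qa_chain = LLMChain(llm=llm, prompt=qa_prompt)

# A custom principle: critique_request drives the self-critique step,
# revision_request drives the rewrite step.
ethical_principle = ConstitutionalPrinciple(
    name="Ethical Principle",
    critique_request="The model should only talk about ethical and legal things.",
    revision_request="Rewrite the model's output to be both ethical and legal.",
)

constitutional_chain = ConstitutionalChain.from_llm(
    chain=qa_chain,
    constitutional_principles=[ethical_principle],
    llm=llm,
    verbose=True,
)

print(constitutional_chain.run(question="How do I maximize profit at any cost?"))
```

The Presidio anonymizer can be sketched just as briefly, assuming `langchain_experimental` and its Presidio dependencies are installed:

```python
from langchain_experimental.data_anonymizer import PresidioAnonymizer

anonymizer = PresidioAnonymizer()

# Detected PII (names, phone numbers, emails, ...) is replaced with
# synthetic values before the text ever reaches an LLM.
print(anonymizer.anonymize("My name is John Doe, call me at 313-666-7440"))
```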