Release Highlights
What’s New in AI Developer Edition 1.1.0
Protegrity AI Developer Edition is a lightweight, containerized sandbox that helps developers, data scientists, and architects quickly explore, prototype, and integrate data protection and discovery workflows, without setting up complex infrastructure or managing its operational overhead.
It is a self-contained, Docker-based environment designed for hands-on experimentation without the need for enterprise infrastructure. With its modular architecture, built-in sample data, and developer-first experience, AI Developer Edition is ideal for evaluating Protegrity’s capabilities in a fast, flexible, and frictionless way.
Protegrity AI Developer Edition is designed to help developers move quickly from idea to implementation, using familiar tools, sample apps, and open APIs.
It provides a streamlined environment in which to build a proof of concept, validate integration points, and get familiar with Protegrity’s core concepts. AI Developer Edition runs entirely on Docker, making it easy to spin up, tear down, and iterate quickly, so you can set up the product fast and independently.
Note: This product is not meant for production use, but it is the perfect launchpad for innovation.
AI Developer Edition is purpose-built for fast, frictionless exploration of Protegrity’s core capabilities.
The following features make it ideal for prototyping and integration:
Modular, Containerized Architecture: AI Developer Edition runs on Docker, making it easy to test, isolate, and iterate.
Sample Apps and Data: Jumpstart evaluation with ready-to-run sample apps that demonstrate real-world use cases, such as finding sensitive data in unstructured text, finding and redacting it, and finding and protecting or unprotecting it.
Python Module: An open-source Python module providing APIs to protect, unprotect, and reprotect sensitive data in Python-based applications. It is available through PyPI for easy installation.
Java Library: An open-source Java library providing APIs to protect, unprotect, and reprotect sensitive data in Java-based applications. It is distributed using Maven Central for easy integration.
Lightweight: No Enterprise Security Administrator (ESA). No orchestration overhead. Just deploy the container and use the sample application.
Data Discovery: This container identifies and classifies sensitive data. It uses built-in and custom classifiers to detect sensitive data with confidence scoring.
AI Developer Edition API Service: A service hosted by Protegrity that allows developers to interact with Protegrity’s protection and discovery services through intuitive endpoints. It supports protection and unprotection of sensitive data, enabling rapid prototyping and testing of data protection scenarios without needing full-scale infrastructure. Registration is required for this service; credentials can be obtained for free.
Synthetic Data: This container analyzes a data set and generates data that mimics the properties of real data, such as data types, ranges, correlations, and distributions. It does not contain any actual personal information.
Semantic Guardrails: A security guardrail engine for AI systems. It evaluates risks in GenAI systems, such as chatbots, workflows, and agents, using advanced semantic analytics and intent classification to detect potentially malicious messages.
Note: This product is continuously improving. The features mentioned here are either already available or will be available shortly.
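The Python module and Java library expose protect, unprotect, and reprotect operations that behave like reversible tokenization: a sensitive value is swapped for a token, and authorized callers can recover the original. The sketch below illustrates that round-trip concept only; `ToyVault`, its method names, and the token format are invented for illustration and are not the Protegrity API (the real module is installed from PyPI and governed by policy).

```python
import secrets

class ToyVault:
    """Illustrative stand-in for a tokenization service. Not the
    Protegrity API: it simply maps random tokens back to the
    original values so that protect() is reversible."""

    def __init__(self):
        self._store = {}

    def protect(self, value: str) -> str:
        # Replace the sensitive value with a random token that
        # reveals nothing about the original.
        token = "tok_" + secrets.token_hex(8)
        self._store[token] = value
        return token

    def unprotect(self, token: str) -> str:
        # An authorized caller recovers the original value.
        return self._store[token]

vault = ToyVault()
token = vault.protect("alice@example.com")
assert token != "alice@example.com"                    # stored data is protected
assert vault.unprotect(token) == "alice@example.com"   # round-trip recovers it
```

In the real module, authorization and token format are controlled by policy rather than by an in-memory dictionary; the sketch only shows why protect and unprotect come as a pair.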
The following personas benefit most from AI Developer Edition:
| Persona | Role Description | Goals | Typical Activities |
|---|---|---|---|
| Application Developer | Builds and integrates applications that handle sensitive data. | - Embed protection APIs. - Prototype quickly. - Validate integration points. | - Run sample apps. |
| Data Scientist / ML Engineer | Works with sensitive datasets in analytics and machine learning workflows. | - Discover and classify PII. - Protect training data. - Ensure compliance. | - Use discovery APIs. - Integrate with Jupyter notebooks. - Test the Python module. |
| Solution Architect | Designs end-to-end data protection strategies across systems and teams. | - Evaluate platform fit. - Define architecture. - Guide implementation. | - Review sample apps. - Test modular deployment. - Assess performance. |
| Security / Privacy Lead | Ensures data protection aligns with compliance and governance requirements. | - Understand protection methods. - Validate policy behavior. - Review audit paths. | - Inspect logs. - Simulate policy scenarios. - Review discovery results. |
AI Developer Edition supports a range of use cases across data protection, security, and emerging GenAI-driven applications.
These use cases focus on helping developers and data scientists secure sensitive data in conventional applications, services, and pipelines.
| Use Case | Description |
|---|---|
| Find and Redact | Discover sensitive data using the Data Discovery API and redact or mask it. |
| Find and Protect | Discover sensitive data using the Data Discovery API and protect (tokenize or encrypt) it. |
| Sample App Prototyping | Use prebuilt apps to simulate real-world scenarios, such as protecting PII in unstructured text. Helps accelerate evaluation and integration. |
| Python Module and Java Library Integration | Integrate protection APIs into Python and Java using lightweight modules. Useful for embedding Protegrity into existing development pipelines. |
| API Evaluation | Directly test protection and discovery APIs using tools like Postman or curl. Enables low-friction exploration of Protegrity’s core capabilities. |
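The Find and Redact flow has two steps: discover sensitive entities, then replace each one with a placeholder. The sketch below illustrates the shape of that flow with two hard-coded regexes; the actual Data Discovery API uses built-in and custom classifiers with confidence scoring, so the `PATTERNS` table, `find`, and `redact` here are illustrative stand-ins, not product calls.

```python
import re

# Illustrative patterns only -- the Data Discovery API ships its own
# classifiers; these regexes just make the two-step flow concrete.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def find(text: str):
    """Step 1 (discover): return (label, match) pairs for each entity."""
    hits = []
    for label, pattern in PATTERNS.items():
        for m in pattern.finditer(text):
            hits.append((label, m.group()))
    return hits

def redact(text: str) -> str:
    """Step 2 (redact): replace each detected entity with [LABEL]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sample = "Contact alice@example.com, SSN 123-45-6789."
print(find(sample))
print(redact(sample))   # Contact [EMAIL], SSN [US_SSN].
```

A Find and Protect flow is identical in shape: step 2 calls a protect (tokenize or encrypt) operation instead of substituting a placeholder.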
AI Developer Edition supports emerging GenAI workflows where sensitive data may be used in prompts, training datasets, or inference pipelines. These use cases help developers and data scientists ensure privacy and compliance when working with large language models (LLMs) and AI-driven applications.
The Semantic Guardrails feature and samples are provided with the Developer Edition. The use cases listed here are potential applications that users can build with the feature.
| Use Case | Description |
|---|---|
| Chatbot Input Protection | Protect sensitive user inputs, such as names, emails, IDs, before passing them to GenAI models. Ensures privacy compliance in conversational AI workflows. |
| Prompt Sanitization | Automatically detect and mask PII in prompts used for LLM-based applications. Helps reduce risk in prompt engineering and inference. |
| Training Data Anonymization | Discover and redact sensitive fields in datasets used to train GenAI models. Supports responsible AI development practices. |
| Synthetic Training Data | Generate synthetic datasets to train GenAI models. The generated data can be adjusted for various scenarios. |
| Notebook-Based Experimentation | Use Jupyter notebooks to test protection and discovery workflows in GenAI pipelines. Ideal for data scientists working with unstructured or semi-structured data. |
These use cases are especially relevant for teams building AI-powered tools that interact with real-world user data, where privacy and data protection are critical.
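The Synthetic Data container works by learning aggregate properties of a real dataset (types, ranges, correlations, distributions) and then drawing new records from them, so no actual personal information carries over. As a rough, stdlib-only illustration of that idea (far simpler than the real container, and with invented column names), a generator can fit per-column statistics and sample from them:

```python
import random
import statistics

# A tiny "real" dataset of (age, account_type) records. The generator
# only learns aggregate properties, never individual values.
real = [(34, "savings"), (29, "checking"), (45, "savings"),
        (52, "checking"), (38, "savings")]

def fit(rows):
    """Learn simple per-column properties: numeric mean/stdev for age,
    and the observed values for the categorical column (sampling them
    with replacement preserves their frequencies)."""
    ages = [age for age, _ in rows]
    kinds = [kind for _, kind in rows]
    return {
        "age_mu": statistics.mean(ages),
        "age_sigma": statistics.stdev(ages),
        "kinds": kinds,
    }

def generate(model, n, seed=0):
    """Draw n synthetic rows that mimic the learned properties."""
    rng = random.Random(seed)  # seeded for reproducible sketches
    return [(round(rng.gauss(model["age_mu"], model["age_sigma"])),
             rng.choice(model["kinds"]))
            for _ in range(n)]

model = fit(real)
synthetic = generate(model, 3)
print(synthetic)
```

The real container additionally preserves cross-column correlations and richer distributions; this sketch only shows why the output contains no original records while still looking statistically similar.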