Customizing the sample application

This section describes the settings for running the sample application.

The steps in this section are optional: the sample application detects and redacts data with its default configuration. Follow them only to customize the behavior, for example, to change the name of the input or output file.

Sample application customization for Python and Java

Note: From the samples directory, use the .py file for Python. For Java on Linux or macOS, use the .sh file; for Java on Windows, use the .bat file.

Specifying the source file

The source file contains the data to be processed. This file can contain a paragraph of text or a table of values. Protegrity Developer Edition can process various files; however, for security reasons, certain characters are rejected and not processed. To enable or disable these security settings, refer to the section Input Sanitization. This release supports only files containing plain text.

To specify the source file:

For Python:

  1. Navigate to the location where Protegrity AI Developer Edition is cloned.
  2. Open the sample-app-find-and-redact.py file from the /samples/python/ directory.
  3. Locate the following statement.
    input_file = base_dir / "sample-data" / "input.txt"
    
  4. Update the path and name for the source file.
  5. Save and close the file.
  6. Run the Python file.
For Java:

  1. Navigate to the location where Protegrity AI Developer Edition is cloned.
  2. Open the SampleAppFindAndRedact.java file from the /samples/java/src/main/java/com/protegrity/devedition/samples/ directory.
  3. Locate the following statement.
    Path inputFile = sampleDataDir.resolve("sample-data").resolve("input.txt");
    
  4. Update the path and name for the source file.
  5. Save and close the file.
  6. Compile the Java code by running the following command from the /samples/java/ directory.
    ./mvnw clean package
    
  7. Run the shell script for Linux or macOS.
    ./sample-app-find-and-redact.sh
    
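The path change in step 4 can be sketched as follows. This is a minimal illustration of the pathlib construction used in the Python sample; the replacement path my-data/customers.txt is a hypothetical example, not a file shipped with the samples.

```python
from pathlib import Path

# The sample derives base_dir from its own location; the current working
# directory stands in for it here.
base_dir = Path.cwd()

input_file = base_dir / "sample-data" / "input.txt"   # default
input_file = base_dir / "my-data" / "customers.txt"   # customized (hypothetical)
print(input_file.name)
```

The `/` operator joins path segments portably, so the same statement works on Linux, macOS, and Windows.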

Specifying the output file

The output file location specifies where the processed output file must be stored.

To specify the output file:

For Python:

  1. Navigate to the location where Protegrity AI Developer Edition is cloned.
  2. Open the sample-app-find-and-redact.py file from the /samples/python directory.
  3. Locate the following statement.
    output_file = base_dir / "sample-data" / "output-redact.txt"
    
  4. Update the path and name for the output file.
  5. Save and close the file.
  6. Run the Python file.
For Java:

  1. Navigate to the location where Protegrity AI Developer Edition is cloned.
  2. Open the SampleAppFindAndRedact.java file from the /samples/java/src/main/java/com/protegrity/devedition/samples/ directory.
  3. Locate the following statement.
    Path outputFile = sampleDataDir.resolve("sample-data").resolve("output-redact.txt");
    
  4. Update the path and name for the output file.
  5. Save and close the file.
  6. Compile the Java code by running the following command from the /samples/java/ directory.
    ./mvnw clean package
    
  7. Run the shell script for Linux or macOS.
    ./sample-app-find-and-redact.sh
    
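One detail worth checking when moving the output file: if the new path points into a directory that does not exist yet, create it before running the sample. The sketch below uses a temporary directory as a stand-in for your chosen output location.

```python
import tempfile
from pathlib import Path

# A temporary directory stands in for your chosen output location.
out_dir = Path(tempfile.mkdtemp()) / "redacted"
# Create the directory (and any missing parents) before writing.
out_dir.mkdir(parents=True, exist_ok=True)

output_file = out_dir / "output-redact.txt"
output_file.write_text("placeholder for redacted output")
print(output_file.exists())
```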

Specifying the configuration settings

Use the config.json configuration file to specify the data that must be redacted or masked. The character that must be used for masking can also be specified.

Before you begin:

Identify the sensitive fields that are present in the source file.

For Python:

  1. Open a command prompt.
  2. Navigate to the /samples/python/ directory where the sample application is extracted.
  3. Run the following command.
    python sample-app-find.py
    
  4. View the supported entities. For a complete list of supported entities, refer to Supported Classification Entities.
For Java:

  1. Open a command prompt.
  2. Navigate to the /samples/java/ directory where the sample application is extracted.
  3. Run the following command.
    ./sample-app-find.sh
    
  4. View the supported entities. For a complete list of supported entities, refer to Supported Classification Entities.

Updating the configuration file

For Python:

  1. Navigate to the location where Protegrity AI Developer Edition is cloned.

  2. Open the config.json file.

  3. Specify the masking character to use in the following code.

    "masking_char": "#"
    
  4. Specify the text to use for the redacted data in the named_entity_map parameter. The following code shows the value used for the sample source file.

    "named_entity_map": {
        "PERSON": "PERSON",
        "LOCATION": "LOCATION",
        "SOCIAL_SECURITY_ID": "SSN",
        "PHONE_NUMBER": "PHONE",
        "AGE": "AGE",
        "USERNAME": "USERNAME"
    }
    
  5. Specify the operation to perform on the source file. The available options are mask and redact.

        "method": "mask"
    
  6. Save and close the file.

  7. Run the sample-app-find-and-redact.py file.

For Java:

  1. Navigate to the location where Protegrity AI Developer Edition is cloned.

  2. Open the config.json file.

  3. Specify the masking character to use in the following code.

    "masking_char": "#"
    
  4. Specify the text to use for the redacted data in the named_entity_map parameter. The following code shows the value used for the sample source file.

    "named_entity_map": {
        "PERSON": "PERSON",
        "LOCATION": "LOCATION",
        "SOCIAL_SECURITY_ID": "SSN",
        "PHONE_NUMBER": "PHONE",
        "AGE": "AGE",
        "USERNAME": "USERNAME"
    }
    
  5. Specify the operation to perform on the source file. The available options are mask and redact.

        "method": "mask"
    
  6. Save and close the file.

  7. Run the shell script for Linux or macOS.

    ./sample-app-find-and-redact.sh
    
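How the mask and redact methods differ can be illustrated with a short sketch. This is not the product's implementation; it only shows, conceptually, how the masking_char, named_entity_map, and method settings combine.

```python
# Illustrative only: conceptual behavior of the documented settings,
# not the product's actual code.
named_entity_map = {"SOCIAL_SECURITY_ID": "SSN", "PHONE_NUMBER": "PHONE"}
masking_char = "#"

def transform(value: str, entity: str, method: str) -> str:
    if method == "redact":
        # Replace the detected value with its mapped placeholder text.
        return named_entity_map.get(entity, entity)
    if method == "mask":
        # Replace every character of the value with the masking character.
        return masking_char * len(value)
    raise ValueError(f"unsupported method: {method}")

print(transform("123-45-6789", "SOCIAL_SECURITY_ID", "redact"))  # SSN
print(transform("123-45-6789", "SOCIAL_SECURITY_ID", "mask"))    # ###########
```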

Specifying the classification score threshold settings

The classification score threshold sets the minimum confidence level needed for the system to treat detected data as valid. It filters out uncertain matches so that only high-confidence results are flagged. The threshold is a value between 0 and 1.0, such as 0.6 for 60%. Lowering it makes the system more sensitive, while raising it reduces false positives.
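The effect of the threshold can be sketched as a simple filter. The (entity, score) pairs below are made-up example values, and whether the product compares with >= or > is an assumption of this sketch.

```python
# Hypothetical detections: (entity, confidence score).
detections = [("PERSON", 0.92), ("PHONE_NUMBER", 0.55), ("AGE", 0.61)]
threshold = 0.6

# Keep only matches at or above the threshold (>= is assumed here).
accepted = [(entity, score) for entity, score in detections if score >= threshold]
print(accepted)  # [('PERSON', 0.92), ('AGE', 0.61)]
```

With the threshold raised to 0.7, the AGE match at 0.61 would also be dropped, reducing false positives at the cost of sensitivity.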

To set the value:

For Python:

  1. Navigate to the location where Protegrity AI Developer Edition is cloned.

  2. Open the config.json file.

  3. Add the following parameter.

    "classification_score_threshold": 0.6
    
  4. Set the threshold to the required value.

    Note: Specify a number between 0 and 1.0.

  5. Save and close the file.

  6. Run the sample-app-find-and-redact.py file.

For Java:

  1. Navigate to the location where Protegrity AI Developer Edition is cloned.

  2. Open the config.json file.

  3. Add the following parameter.

    "classification_score_threshold": 0.6
    
  4. Set the threshold to the required value.

    Note: Specify a number between 0 and 1.0.

  5. Save and close the file.

  6. Run the shell script for Linux or macOS.

    ./sample-app-find-and-redact.sh
    

Specifying the logging parameters

Log messages are sent to the terminal. To capture logging data, redirect the output of the commands to a log file.
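Redirecting the output to a log file can be sketched as follows. The `python -c "print(...)"` command is a stand-in for the real invocation, for example `python sample-app-find-and-redact.py`; the log file name is illustrative.

```python
import subprocess
import sys
import tempfile
from pathlib import Path

# Run a command and capture its terminal output. The inline print
# stands in for the sample application's log messages.
result = subprocess.run(
    [sys.executable, "-c", "print('INFO: detection complete')"],
    capture_output=True,
    text=True,
    check=True,
)

# Save the captured output to a log file.
log_file = Path(tempfile.mkdtemp()) / "sample-app.log"
log_file.write_text(result.stdout)
print(log_file.read_text().strip())
```

From a shell, the equivalent is redirecting stdout and stderr of the sample run into a file.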

To set the logging level:

For Python:

  1. Navigate to the location where Protegrity AI Developer Edition is cloned.

  2. Open the config.json file.

  3. Locate or add the following statement.

    "enable_logging": true,
    "log_level": "info",
    
  4. Ensure that enable_logging is set to true, and set log_level to the level of messages that must be displayed.

  5. Save and close the file.

  6. Run the sample-app-find-and-redact.py file.

For Java:

  1. Navigate to the location where Protegrity AI Developer Edition is cloned.

  2. Open the config.json file.

  3. Locate or add the following statement.

    "enable_logging": true,
    "log_level": "info",
    
  4. Ensure that enable_logging is set to true, and set log_level to the level of messages that must be displayed.

  5. Save and close the file.

  6. Run the shell script for Linux or macOS.

    ./sample-app-find-and-redact.sh
    

Python module and Java library configuration

The following parameters are configurable for AI Developer Edition.

| Parameter | Description | Values | Example |
|---|---|---|---|
| endpoint_url | The Data Discovery and Semantic Guardrails endpoints. | Specify a URL. | Classification API: http://localhost:8580/pty/data-discovery/v1.1/classify; Semantic Guardrails API: http://localhost:8581/pty/semantic-guardrail/v1.0/conversations/messages/scan |
| named_entity_map | A dictionary or map of entities and their corresponding replacement names. | Supported Classification Entities | "named_entity_map": { "PERSON": "PERSON", "PHONE_NUMBER": "PHONE" } |
| masking_char | The character to be used for masking. | Specify a special character. | # |
| classification_score_threshold | The minimum confidence level needed for the system to treat detected data as valid. | Specify a number between 0 and 1.0. | 0.6 |
| method | The method for processing sensitive data. | redact or mask | mask |
| enable_logging | Specify whether to enable logging. | true or false | true |
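The parameters above can be assembled into a complete config.json. The values below mirror the documented examples; verify the endpoint URL against your own deployment before use.

```python
import json

# config.json contents assembled from the documented parameters.
# Values mirror the examples in the table above.
config = {
    "endpoint_url": "http://localhost:8580/pty/data-discovery/v1.1/classify",
    "named_entity_map": {"PERSON": "PERSON", "PHONE_NUMBER": "PHONE"},
    "masking_char": "#",
    "classification_score_threshold": 0.6,
    "method": "mask",
    "enable_logging": True,
    "log_level": "info",
}
print(json.dumps(config, indent=2))
```

Writing the dictionary with json.dumps guarantees the file is valid JSON, which avoids parse errors when the sample application loads it.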

Last modified : December 10, 2025