Generative AI Tools and Your Agency

Many new tools are emerging that use Generative Artificial Intelligence (GenAI). While these tools have the potential to streamline many agency functions and reduce errors and omissions (E&O) exposure, use caution when adopting this new technology.

Generative Artificial Intelligence (GenAI): Deep-learning models that can generate high-quality text, images, and other content based on the data they were trained on. GenAI models use prompts to guide content generation and use transfer learning to become more proficient. Examples of GenAI tools include ChatGPT, Bard, Cohere, Copy.ai, Scribe, and Claude.

GenAI Tips to Keep in Mind

  • Education is essential. Agencies should understand the positives and negatives of using Generative AI tools. These tools are only as good as the data they consume, can be prone to unfair bias, and have been shown to “hallucinate,” creating false information when they are unable to produce an accurate answer. They can also be manipulated with malicious input to favor certain types of responses, including dangerous or unethical ones. Always exercise caution and verify that the information they produce is accurate and unbiased.
  • Agency management must develop a Generative AI Use Policy that provides clear guidance on how GenAI tools should and should not be used at the agency. The policy should be simple to follow and clearly define what use is and is not allowable. When developing it, agency management will need to determine its comfort level with employee use of AI. View examples of a Generative AI Use Policy and a Generative AI Prohibited Use Policy (reprinted with permission of Insurance Agents & Brokers).
  • Privacy and data security are significant concerns when using Generative AI tools. Many tools use your data to further train the AI and may share it with third parties or expose it in a data breach. There are already examples of employees inadvertently exposing Personally Identifiable Information (PII) and proprietary information.

    Some things to consider:

    • Protect confidential and proprietary information. Do not input it into Generative AI tools, where it could potentially be exposed to the public.
    • Adhere to copyright laws.
    • Update agency privacy and data retention policies to address AI tools. Review any contracts with AI tool providers carefully to determine how your data will be used and protected. Review your Cyber Insurance policy for any provisions related to AI use as well.
    • Know that Generative AI tools collect data such as IP addresses, browser types, and usage data, which may then be shared with third parties.
    • If you use a tool such as a chatbot, make clients aware that they are interacting with AI and explain your data usage policies.
    • Follow strong password practices and use two-factor authentication (2FA)/multi-factor authentication (MFA).

This is a quickly changing technology, so keeping yourself and your staff educated is vital.

Please note that insurance is a heavily regulated industry. The National Association of Insurance Commissioners (NAIC) has issued a model bulletin on the use of Artificial Intelligence, and individual states have done the same. While these are directed at insurers, many of the same concepts apply to insurance agencies, and agencies should stay aware of the status of current and potential regulation. We recommend that you consult with an attorney or other professional on the use of GenAI and other Artificial Intelligence.

Reprinted from UTICA National June 2024 Newsletter
